Mockbit #76 · ML Coding · Hard · NumPy · ~25 min

Gradient Descent Step

Problem

You are implementing the core optimization step for training a linear regression model. At each training iteration, gradient descent computes how much each weight contributed to the prediction error, then adjusts the weights in the direction that reduces error.

Given a feature matrix X (n_samples × n_features), target vector y (n_samples,), weight vector w (n_features,), and learning rate alpha, perform one gradient descent update:

  1. predictions = X @ w
  2. errors = predictions - y
  3. gradient = X.T @ errors / n_samples
  4. new_w = w - alpha * gradient

Return new_w rounded to 6 decimal places as a list.
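The four steps above translate directly into NumPy. A minimal sketch (the function name `gradient_descent_step` is an assumption; the problem does not specify a signature):

```python
import numpy as np

def gradient_descent_step(X, y, w, alpha):
    # Assumed signature -- the problem statement does not fix one.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    n_samples = X.shape[0]

    predictions = X @ w                   # step 1: forward pass
    errors = predictions - y              # step 2: signed residuals
    gradient = X.T @ errors / n_samples   # step 3: average gradient
    new_w = w - alpha * gradient          # step 4: update

    return [round(v, 6) for v in new_w.tolist()]
```

Passing the lists through `np.asarray` keeps the sketch robust whether the caller supplies Python lists or arrays; the shapes (`(n, d)`, `(n,)`, `(d,)`) line up so `@` needs no reshaping.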

Examples

Example 1

Input: ([[1, 2], [3, 4]], [3, 7], [1, 1], 0.1)
Output: [1.0, 1.0]

Predictions [3,7] exactly match targets. Errors are [0,0]. Gradient is [0,0]. Weights unchanged.

Example 2

Input: ([[1]], [10], [0], 0.1)
Output: [1.0]

Prediction = 0, error = -10, gradient = (1 * -10)/1 = -10. new_w = 0 - 0.1 * (-10) = 1.0. The weight increases toward the target.

Example 3

Input: ([[2], [3]], [4, 6], [0], 0.5)
Output: [6.5]

Predictions [0, 0], errors [-4, -6], gradient = (2 * (-4) + 3 * (-6))/2 = -13. new_w = 0 - 0.5 * (-13) = 6.5.
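Example 3's arithmetic can be checked step by step with plain NumPy (a standalone sketch, not graded code):

```python
import numpy as np

X = np.array([[2.0], [3.0]])
y = np.array([4.0, 6.0])
w = np.array([0.0])
alpha = 0.5

errors = X @ w - y                 # predictions [0, 0] minus targets -> [-4, -6]
gradient = X.T @ errors / len(y)   # (2*(-4) + 3*(-6)) / 2 = -13
new_w = w - alpha * gradient       # 0 - 0.5 * (-13) = 6.5
```

Note that `X.T @ errors` sums each feature's contribution across samples, which is why the gradient averages over n_samples rather than n_features.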

Constraints
  • 2 <= n_samples <= 100
  • 1 <= n_features <= 20
  • -100 <= X[i][j], y[i], w[j] <= 100
  • 0 < alpha <= 1.0
Reference solution

Reference solution available after you attempt the question.

Ready to solve it?

Start a session on Mockbit #76. Write your code, run it against hidden tests, and get graded with specific critique on each axis.
