Mockbit #69
ML Coding · Medium · NumPy · ~15m

Softmax

Problem

You are implementing the output layer of a neural network classifier. The final layer produces raw scores called logits, which must be converted to probabilities using the softmax function.

Given a 1D array of logits, return the softmax probabilities using the numerically stable implementation:

softmax(x)[i] = exp(x[i] - max(x)) / sum(exp(x - max(x)))

Subtracting the maximum before computing exp prevents overflow for large logit values. Round each output value to 6 decimal places. Return as a Python list.
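
A minimal sketch of one possible implementation in NumPy (the function name and rounding style are illustrative, not the site's reference solution):

```python
import numpy as np

def softmax(logits):
    # Work in float64 so exp and the division are well behaved.
    x = np.asarray(logits, dtype=np.float64)
    # Shift by the max: the largest entry becomes 0, so exp never overflows.
    exps = np.exp(x - np.max(x))
    probs = exps / np.sum(exps)
    # Round to 6 decimal places and return a plain Python list.
    return [round(float(p), 6) for p in probs]
```

For instance, softmax([1, 2, 3]) returns [0.090031, 0.244728, 0.665241], matching Example 1 below.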

Examples

Example 1

Input: [1, 2, 3]
Output: [0.090031, 0.244728, 0.665241]

Larger logits get larger probabilities. After subtracting the max (3), the computation becomes exp([-2, -1, 0]) / sum(exp([-2, -1, 0])).

Example 2

Input: [1000, 1000]
Output: [0.5, 0.5]

Without max subtraction, exp(1000) overflows to inf and the division produces nan. The stable version shifts the logits to [0, 0], giving [0.5, 0.5].
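
To see the failure mode concretely (illustrative snippet, assuming NumPy):

```python
import numpy as np

x = np.array([1000.0, 1000.0])

naive = np.exp(x)             # overflows: [inf, inf] (NumPy emits a RuntimeWarning)
print(naive / naive.sum())    # inf / inf -> [nan, nan]

stable = np.exp(x - x.max())  # exp([0, 0]) = [1, 1]
print(stable / stable.sum())  # [0.5, 0.5]
```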

Example 3

Input: [100, 0, 0]
Output: [1.0, 0.0, 0.0]

One dominant logit saturates the distribution. Each smaller entry contributes roughly exp(0 - 100) = exp(-100) ≈ 3.7e-44, which rounds to 0.0 at 6 decimal places.
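
A quick check of the magnitude involved (illustrative):

```python
import numpy as np

print(np.exp(-100.0))  # ~3.72e-44, far below the 0.0000005 rounding threshold
```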

Constraints
  • 1 <= len(logits) <= 1000
  • -1000 <= logits[i] <= 1000

Reference solution

Reference solution available after you attempt the question.

Ready to solve it?

Start a session on Mockbit #69. Write your code, run it against hidden tests, and get graded with specific critique on each axis.
