Softmax
You are implementing the output layer of a neural network classifier. The final layer produces raw scores called logits, which must be converted to probabilities using the softmax function.
Given a 1D array of logits, return the softmax probabilities using the numerically stable implementation:
softmax(x)[i] = exp(x[i] - max(x)) / sum(exp(x - max(x)))
Subtracting the maximum before computing exp prevents overflow for large logit values. Round each output value to 6 decimal places. Return as a Python list.
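For concreteness, here is a minimal sketch of that computation in pure Python (the name `softmax` and the standalone-function shape are illustrative, not a required signature):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a 1D list of logits."""
    m = max(logits)  # shift so the largest exponent is exp(0) = 1, avoiding overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [round(e / total, 6) for e in exps]
```

The shift does not change the result: exp(x - m) = exp(x) / exp(m), and the common factor exp(m) cancels between numerator and denominator.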
Example 1
Input: [1, 2, 3]
Output: [0.090031, 0.244728, 0.665241]
Explanation: Larger logits receive larger probabilities. After subtracting the max, the computation is exp([-2, -1, 0]) / sum(exp([-2, -1, 0])).
Example 2
Input: [1000, 1000]
Output: [0.5, 0.5]
Explanation: Without max subtraction, exp(1000) overflows. The stable version shifts the logits to [0, 0], giving [0.5, 0.5]. A short demonstration follows Example 3.
Example 3
Input: [100, 0, 0]
Output: [1.0, 0.0, 0.0]
Explanation: One dominant logit saturates the distribution. After shifting, each smaller entry contributes exp(-100) ≈ 3.7e-44, which rounds to 0.0 at 6 decimal places.
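To make Example 2 concrete, the snippet below (assuming the `softmax` sketch above is in scope) shows the naive computation overflowing while the stable one succeeds:

```python
import math

# Naive approach: math.exp(1000) exceeds the double-precision range,
# which raises OverflowError in pure Python (NumPy would return inf instead).
try:
    [math.exp(x) for x in [1000, 1000]]
except OverflowError:
    print("naive exp(1000) overflows")

# Stable version shifts [1000, 1000] to [0, 0] before exponentiating.
print(softmax([1000, 1000]))  # [0.5, 0.5]
print(softmax([1, 2, 3]))     # [0.090031, 0.244728, 0.665241]
print(softmax([100, 0, 0]))   # [1.0, 0.0, 0.0]
```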
Constraints
- 1 <= len(logits) <= 1000
- -1000 <= logits[i] <= 1000
A reference solution becomes available after you attempt the question.
Ready to solve it?
Start a session on Mockbit #69. Write your code, run it against hidden tests, and get graded with specific critique on each axis.