Cosine Similarity
You are building a semantic search engine that compares document embeddings. To find how similar two documents are, you compute the cosine similarity between their embedding vectors — a value between -1 and 1 where 1 means identical direction, 0 means orthogonal, and -1 means opposite.
Given two 1D vectors a and b of equal length, return their cosine similarity:
cosine_similarity(a, b) = dot(a, b) / (norm(a) * norm(b))
If either vector is all zeros (no magnitude), return 0.0.
Round the result to 6 decimal places.
Example 1
Input: a = [1, 2, 3], b = [100, 200, 300]
Output: 1.0
Explanation: b is a scaled version of a, so both point in the same direction. Cosine similarity measures angle, not magnitude, so scaling has no effect.
Example 2
Input: a = [1, 2, 3], b = [-1, -2, -3]
Output: -1.0
Explanation: The vectors point in opposite directions. The dot product is negative and the norms are equal, so the result is -1.0.
Example 3
Input: a = [1, 1], b = [1, 0]
Output: 0.707107
Explanation: The vectors form a 45-degree angle. dot(a, b) = 1, norm(a) = √2, norm(b) = 1, so the result is 1/√2 ≈ 0.707107.
Constraints
- 1 <= len(a) == len(b) <= 1000
- -1000 <= a[i], b[i] <= 1000
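Before looking at the reference solution, here is one possible sketch of the approach in plain Python (no external libraries): compute the dot product and the two Euclidean norms, handle the zero-vector case, then round. The function name `cosine_similarity` matches the formula above; the single-pass loop structure is just one way to write it.

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors
    dot = sum(x * y for x, y in zip(a, b))
    # Euclidean (L2) norms
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    # A zero vector has no direction, so the similarity is defined as 0.0
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    # Round to 6 decimal places as the problem requires
    return round(dot / (norm_a * norm_b), 6)

print(cosine_similarity([1, 2, 3], [100, 200, 300]))  # 1.0
print(cosine_similarity([1, 1], [1, 0]))              # 0.707107
```

Note that the zero-vector check must come before the division; otherwise an all-zeros input raises a `ZeroDivisionError`.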
Reference solution available after you attempt the question.
Ready to solve it?
Start a session on Mockbit #70. Write your code, run it against hidden tests, and get graded with specific critique on each axis.