It's simply a classification technique: the class of an unknown element is determined by the classes of its nearest neighbors. The number of neighbors consulted is not fixed (it may be 2, 3, 4, ..., depending on the application), so it is denoted by k. That is why the method is called k-NN: k nearest neighbors.
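Here is a minimal sketch of that idea in plain Python (the training data, feature values, and choice of k are all made up for illustration):

```python
from collections import Counter
import math

def knn_classify(train, query, k):
    """train: list of ((features...), label) pairs; query: tuple of features."""
    # Sort training points by Euclidean distance to the query point.
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    # Take the labels of the k nearest neighbors and pick the majority class.
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Toy example: two features, two classes.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.5, 4.5), "B")]
print(knn_classify(train, (1.1, 0.9), k=3))  # -> "A"
```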
k-NN counts how many of the k closest training points belong to each class. For example, in a 2D space where each axis is a feature (say, the weight and height of people), if you want to label people (say, by sex), you take the k training points closest to your test point; if most of them are female, you label the new test case as female. It is usually better to use an odd k (such as 1, 3, 5) to avoid ties. You can also supply a distance function and weight neighbors by their distances, so that instead of a plain count of the closest members, one very close sample can outweigh two or three far-away samples (see the sketch below). You can also read about an application of k-NN in this paper: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&tp=&arnumber=6578840
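A sketch of that distance-weighted variant, using the height/weight example from above; the sample points and the inverse-distance weighting scheme are assumptions for illustration, not the only way to weight neighbors:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k):
    """Vote with inverse-distance weights so closer neighbors count more."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    scores = defaultdict(float)
    for features, label in neighbors:
        d = math.dist(features, query)
        scores[label] += 1.0 / (d + 1e-9)  # small epsilon avoids division by zero
    return max(scores, key=scores.get)

# (height cm, weight kg) -> sex; purely invented sample points.
people = [
    ((160.0, 55.0), "female"), ((165.0, 60.0), "female"),
    ((158.0, 52.0), "female"), ((180.0, 85.0), "male"),
    ((175.0, 80.0), "male"),
]
print(weighted_knn(people, (162.0, 57.0), k=3))  # -> "female"
```

With weighting, a single neighbor sitting right next to the query can dominate the vote even when the other k-1 neighbors belong to a different class, which is exactly the behavior described above.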