The simplest way is to use an "integrate & fire" (IF) neuron, or one of its variants such as the "leaky integrate & fire" (LIF) or stochastic IF/LIF neurons.
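As a rough sketch (not a drop-in implementation), here is how an LIF neuron can turn a constant real-valued input into a spike train; the parameter names and values (`tau`, `v_th`, `T`) are arbitrary choices of mine:

```python
import numpy as np

def lif_encode(x, T=100, tau=20.0, v_th=1.0, dt=1.0):
    """Encode a constant real-valued input x as a spike train
    with a leaky integrate-&-fire neuron (reset to zero)."""
    v = 0.0
    spikes = np.zeros(T, dtype=bool)
    for t in range(T):
        v += dt * (-v / tau + x)   # leaky integration of the input
        if v >= v_th:              # threshold crossing -> emit a spike
            spikes[t] = True
            v = 0.0                # reset the membrane potential
    return spikes

print(lif_encode(0.3).sum(), "spikes in 100 steps")
```

Larger inputs drive the membrane potential to threshold faster, so the spike count grows with the feature value.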
Another very typical route is to use a Poisson process and set the instantaneous spike probability to a level determined by your real-valued features.
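For a quick illustration, a per-time-step Bernoulli draw approximates such a Poisson process; the `max_rate` parameter and the assumption that features are normalized to [0, 1] are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(x, T=100, max_rate=0.5):
    """Bernoulli spike per time step with probability proportional
    to the feature value x (assumed normalized to [0, 1])."""
    p = np.clip(x * max_rate, 0.0, 1.0)  # instantaneous spike probability
    return rng.random(T) < p             # boolean spike train of length T

features = np.array([0.1, 0.5, 0.9])
trains = np.stack([poisson_encode(x) for x in features])
print(trains.mean(axis=1))  # empirical rates scale with the feature values
```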
These terms (IF, LIF, Poisson process) should be good starting points for you to search further if you require more information.
The basic principle behind converting ANNs into SNNs is that the firing rates of the spiking neurons should match the graded activations of the analog neurons. Cao et al. (2015) first suggested a mechanism for converting (ReLU) activations, but a theoretical groundwork for this principle was lacking. Rueckauer et al. (2016) later gave an analytical explanation for the approximation and, on its basis, derived a simple modification of the reset mechanism following a spike, which turns each SNN neuron into an unbiased approximator of the target function.
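The effect of that reset modification is easy to see in a toy simulation (the constant-input IF neuron below is my own sketch, not code from the paper): resetting to zero discards whatever membrane charge exceeded the threshold, biasing the firing rate downward, while subtracting the threshold at each spike keeps the rate an unbiased estimate of the input.

```python
def if_rate(x, T=1000, v_th=1.0, subtract_reset=True):
    """Firing rate of a non-leaky integrate-&-fire neuron driven by a
    constant input x, with reset-by-subtraction or reset-to-zero."""
    v, n_spikes = 0.0, 0
    for _ in range(T):
        v += x                                   # integrate the input
        if v >= v_th:
            n_spikes += 1
            v = v - v_th if subtract_reset else 0.0
    return n_spikes / T

x = 0.37  # stands in for a ReLU activation scaled to [0, 1]
print("reset by subtraction:", if_rate(x, subtract_reset=True))   # ~0.370
print("reset to zero:       ", if_rate(x, subtract_reset=False))  # ~0.333, biased low
```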