As far as spying on a person's thoughts is concerned, this is not much better than making a video of what they are looking at. I'm confident our inner thoughts will be private for the foreseeable future.
Presenting images to the retina and making recordings from cells in the LGN (lateral geniculate nucleus) only shows that there is a map of the visual field in the LGN. This is the first of a series of maps at progressively later points in the visual pathway. It's nice to be able to make realtime recordings from it, but its existence is certainly not news.
For those who don't know, the LGN is the first point after the retina in the visual pathway. Retinal ganglion cells project to the LGN, where they synapse onto neurons that project to primary visual cortex at the back of the brain. From there, further projections fan out to other areas of visual cortex. Other sensory pathways also make their first synapses in parts of the thalamus, hence the idea that the thalamus is responsible for integrating sensory input.
At each of the stages of the visual pathway, electrical probing reveals maps of the visual field. Stimuli at different places within the visual field cause electrical activity at corresponding points in the map. Each map has a different layout and responds differently to stimuli in the visual field.
The further along the visual pathway you get, the more complex the relationship between the pattern of electrical activity in the map and the actual stimulus becomes. This appears to indicate that the visual signal is being transformed in various ways at each stage of the visual pathway. At later stages, the signal has been transformed a number of times, so the relationship between the result and the original signal is complex. (Some describe this as progressively higher levels of processing - I prefer to describe it as successive transformations of a signal.)
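That idea of successive transformations can be illustrated with a toy sketch. This is purely an analogy, not a model of the brain: a one-dimensional "stimulus" is passed through a chain of simple, invented transformations (a difference filter, rectification, pooling). After one stage the output is still easy to relate to the input; after several, position and sign information have been discarded and the relationship is much less obvious.

```python
# Toy analogy only: a signal passed through successive transformations.
# The stage names and operations are invented for illustration.

def edge_filter(signal):
    # Stage 1: local differences between neighbouring values
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

def rectify(signal):
    # Stage 2: keep only positive responses, zero out the rest
    return [max(0, x) for x in signal]

def pool(signal, width=2):
    # Stage 3: sum over neighbouring values, discarding fine position
    return [sum(signal[i:i + width]) for i in range(0, len(signal), width)]

stimulus = [0, 0, 1, 1, 0, 0, 1, 0]
stage1 = edge_filter(stimulus)    # [0, 1, 0, -1, 0, 1, -1] - still easy to read
stage3 = pool(rectify(stage1))    # [1, 0, 1, 0] - sign and exact position gone
print(stage1)
print(stage3)
```

Recovering the original stimulus from `stage3` alone is already ambiguous after three trivial stages; with the unknown and far more complex transformations of the real visual pathway, reading meaning directly off a late-stage map is correspondingly harder.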
The LGN is not considered to be just a passive relay. Some transformation of the visual signal takes place there. However, it is the first relay after the retina, so sticking electrodes into it and claiming you are seeing "what raw experience looks like" is going a bit far, in my opinion. It sounds a bit like finding out how to unscrew the lens of a camera and claiming you therefore understand photography.
Even taking realtime recordings from the LGN and all of the later maps, massively impressive though that would be, wouldn't convince me that we understand visual experience. Not until we figure out what those transformations actually are, and what they are for, will we realistically be able to make that claim.
Making realtime recordings of the first map gets us one step closer to understanding the visual system, but let's not get carried away.