Having trouble finding the papers referenced in the podcast, but it sounds like Jesse Cook ran studies on commercially available sleep trackers to see how they compare to actual sleep studies (with perhaps an unrepresentative sample of the population). In the last couple of years the devices went from detecting REM sleep correctly about 30% of the time to about 65%.
17:01 So that’s encouraging, very encouraging in many respects. And it’s gotten to a point, Jeff, where these devices, from my data, actually seem to perform better than clinical actigraphs.
So in my studies, patients are in our sleep center, Wisconsin Sleep, undergoing a full clinical polysomnographic (PSG) evaluation. At the same time, these individuals are wearing a consumer sleep tracker or wearable on their non-dominant wrist. In some of my designs I also have a clinical actigraph as well, so we can make comparisons there.
30:39 When I first looked at the multisensory device in my 2018 paper, its sensitivity, that is, the ability of the device to detect true PSG-labelled sleep stages (whether light sleep, deep sleep, or REM sleep), was very, very poor.
To give an example, that device could only reliably detect REM sleep 30% of the time in congruence with PSG. So that’s not very good. That’s worse than a coin toss.
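The sensitivity figure being discussed is typically computed epoch by epoch against PSG. A minimal sketch of that calculation, using entirely hypothetical 30-second epoch labels (not data from the papers):

```python
# Hypothetical 30-second epochs scored by PSG (ground truth) and by a wearable.
psg    = ["REM", "REM", "light", "REM", "deep", "REM", "wake", "REM"]
device = ["REM", "light", "light", "REM", "deep", "light", "wake", "light"]

def stage_sensitivity(psg, device, stage):
    """Fraction of true PSG epochs of `stage` that the device also labels as `stage`."""
    hits = sum(1 for p, d in zip(psg, device) if p == stage and d == stage)
    total = sum(1 for p in psg if p == stage)
    return hits / total if total else float("nan")

print(stage_sensitivity(psg, device, "REM"))  # 2 of 5 true REM epochs detected -> 0.4
```

A device with 30% REM sensitivity would, on average, correctly flag only 3 of every 10 true REM epochs this way.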
Jesse Cook: 31:31 But basically the newer models have gotten better. That may have to do with more attention to their algorithms; it’s hard to say. Improvements in integrating the heart-rate sensors, who really knows. But that REM sensitivity is now up to about 65% in some of the more advanced models, which is encouraging.
Sleep specificity is what it’s called: the ability to detect true wake. And right now, for most of the models that are wrist-worn and fundamentally accelerometer-based, even with the heart rate, at best you’re looking at a 40% ability in that regard. So very poor.
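The wake-detection metric is the complementary calculation: treating scoring as binary sleep/wake, it is the fraction of true PSG wake epochs the device also calls wake. A hedged sketch with made-up epoch labels:

```python
# Hypothetical binary sleep/wake epochs from PSG and a wearable.
psg    = ["wake", "sleep", "sleep", "wake", "sleep", "wake", "wake", "sleep"]
device = ["wake", "sleep", "sleep", "sleep", "sleep", "sleep", "wake", "sleep"]

def wake_specificity(psg, device):
    """Fraction of true PSG wake epochs that the device also scores as wake."""
    wake_total = sum(1 for p in psg if p == "wake")
    wake_hits = sum(1 for p, d in zip(psg, device) if p == "wake" and d == "wake")
    return wake_hits / wake_total

print(wake_specificity(psg, device))  # 2 of 4 true wake epochs detected -> 0.5
```

Accelerometer-based trackers tend to miss quiet wakefulness because lying still awake looks like sleep to a motion sensor, which is one plausible reason this number lags the sleep-stage sensitivities.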