Researchers from the Human Sensing Laboratory at Carnegie Mellon University (CMU) have published a paper on DensePose From WiFi, an AI model that detects the poses of multiple people in a room using only the signals from WiFi transmitters. In experiments on real-world data, the model achieves an average precision (AP) of 87.2 at the 50% intersection-over-union (IoU) threshold.
What is the DensePose From WiFi model?
The DensePose From WiFi model is an AI system created by researchers at Carnegie Mellon University that detects the poses of multiple humans in a room using only WiFi signals. It converts the signals received from three WiFi transmitters into an image-like feature map, which a neural network then decodes into UV maps of human body surfaces, allowing it to localize and estimate the pose of multiple people at once.
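The data flow above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the tensor shapes (3×3 antenna pairs, 30 subcarriers, 5 time samples), the 32×32 feature map, and the random linear layers are placeholders, not the authors' actual architecture.

```python
import numpy as np

# Hedged sketch of the signal -> feature map -> UV map pipeline.
# All shapes and weights are illustrative, NOT the paper's network.
rng = np.random.default_rng(0)

# Hypothetical CSI-style input: 3 transmit x 3 receive antennas,
# 30 subcarriers, 5 consecutive time samples.
csi = rng.standard_normal((3, 3, 30, 5))

# Step 1: project the flattened WiFi signal tensor to an image-like
# feature map (here 8 channels at 32x32) with a random linear layer.
W1 = rng.standard_normal((csi.size, 8 * 32 * 32)) * 0.01
feature_map = (csi.ravel() @ W1).reshape(8, 32, 32)

# Step 2: a per-pixel head maps the 8 feature channels to a (U, V)
# coordinate pair, standing in for the DensePose-style decoder.
W2 = rng.standard_normal((8, 2)) * 0.1
uv_map = np.einsum("chw,cu->uhw", feature_map, W2)

print(feature_map.shape)  # (8, 32, 32)
print(uv_map.shape)       # (2, 32, 32)
```

The point of the sketch is the shape transformation: a non-visual signal tensor is reshaped into the same spatial layout an image-based DensePose head expects, so standard dense-prediction machinery can run on top of it.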
How accurate is the model?
In experiments conducted on real-world data, the DensePose From WiFi model achieved an average precision of 87.2 at the 50% intersection-over-union (IoU) threshold, indicating strong pose-detection performance.
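To make the metric concrete: IoU measures how much a predicted region overlaps a ground-truth region, and at the 50% threshold a prediction counts as correct only if IoU ≥ 0.5. A minimal sketch for axis-aligned boxes (a simplification; the paper evaluates dense body-surface predictions, not just boxes):

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# At the 50% threshold, a detection counts only if IoU >= 0.5.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ~= 0.143 -> not a match
print(box_iou((0, 0, 2, 2), (0, 0, 2, 3)))  # 2/3 ~= 0.667 -> a match
```

Average precision then summarizes, over all detections ranked by confidence, how precision trades off against recall under that matching rule.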
What are the future plans for this technology?
The researchers plan to enhance the model by collecting data across multiple room layouts and extending it to predict 3D human body shapes from WiFi signals. The goal is to position ordinary WiFi devices as low-cost, privacy-preserving sensors compared to RGB cameras and LiDAR systems.