
Q&A: Google on developing Pixel Watch’s fall detection capabilities, part one

Tech giant Google announced in March that it had added fall detection to its Pixel Watch; the feature uses sensors to determine whether a user has taken a hard fall.

If the watch doesn’t detect any movement from the user for about 30 seconds, it vibrates, sounds an alarm and prompts the user to indicate whether they’re okay or need help. The watch notifies emergency services if there is no response after one minute.
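For a concrete picture of that flow, here is a minimal sketch of the check-in logic in Kotlin, assuming the timings described above; the names and structure are illustrative, not Google’s implementation.

```kotlin
// Minimal sketch of the check-in flow described above. Timings follow the
// article; the names and structure are illustrative, not Google's code.

enum class FallResponseState { MONITORING, PROMPTING_USER, CALLING_EMERGENCY, RESOLVED }

class FallResponder(
    private val vibrate: () -> Unit,
    private val soundAlarm: () -> Unit,
    private val callEmergencyServices: () -> Unit
) {
    var state = FallResponseState.MONITORING
        private set

    // Called once the watch believes a hard fall has occurred.
    fun onFallDetected(
        movementWithinSeconds: (Long) -> Boolean,
        userRespondedWithinSeconds: (Long) -> Boolean
    ) {
        // About 30 seconds with no movement: vibrate, alarm, prompt the user.
        if (!movementWithinSeconds(30)) {
            state = FallResponseState.PROMPTING_USER
            vibrate()
            soundAlarm()
            // No response after another minute: notify emergency services.
            if (!userRespondedWithinSeconds(60)) {
                state = FallResponseState.CALLING_EMERGENCY
                callEmergencyServices()
                return
            }
        }
        state = FallResponseState.RESOLVED
    }
}
```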

In the first part of our two-part series, Edward Shi, product manager on the Android and Pixel personal safety team at Google, and Paras Unadkat, product manager and Fitbit product lead for wearable health/fitness sensing and machine learning at Google, sat down with MobiHealthNews to discuss the steps they and their teams took to develop the Pixel Watch’s fall detection technology.

MobiHealthNews: Can you tell me about the development process of fall detection?

Paras Unadkat: It was definitely a long journey. We started this a few years ago, and the first question was: how do we even think about collecting a dataset and understanding falls from a motion-sensor perspective? What does a fall look like?

To achieve this, we consulted a large number of experts working in university laboratories in different places. We discussed the mechanics of a fall: the biomechanics, what happens to the human body, and how someone reacts when they fall.

We collected a lot of data in controlled environments: induced falls, people strapped into harnesses, simple losses of balance. We just saw what that looked like, and that was eye-opening.

That let us start the process and build a first dataset to really understand what falls look like, and to break down how we actually think about detecting falls and analyzing the data.

Additionally, over a number of years, we ran an extensive data collection effort, gathering sensor data from individuals engaged in other, non-fall activities. The big thing is distinguishing what is a fall from what isn’t.

And then during the development process, we also had to figure out how to actually verify that the thing works. One thing we did was travel to Los Angeles and work with a stunt crew: a bunch of people took our finished product, tested it, and we used that to validate detection across all the different kinds of falls people were actually taking.

And they were trained professionals, so they didn’t hurt themselves. We could actually see all these different types of falls. That was really cool to see.

MHN: So you’ve been working with stunt performers to actually see how the sensors work?

Unadkat: Yes, exactly. We had people carry out and simulate many different types of falls, so to speak. And on top of the rest of the data we collected, that gave us confirmation that the feature actually worked in real-world situations.

MHN: How can you tell the difference between someone who is playing with their child on the floor and slams their hand down, or something similar, and someone who actually takes a bad fall?

Unadkat: There are different ways we do that. We use sensor fusion between a few different types of sensors on the device, including the barometer, which can actually detect changes in altitude. So when you fall, you drop from one level to another and land on the ground.

We can also tell when a person stays still, lying there for a period of time. That feeds into our output: “Okay, this person was moving, then suddenly there was a hard impact, and they stopped moving. They probably took a bad fall and probably need help.”
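A heavily simplified sketch of the sensor fusion Unadkat describes might combine an altitude drop, a hard impact and post-impact stillness. All thresholds below are invented for illustration; the real system is learned from data rather than hand-tuned rules.

```kotlin
// Illustrative-only fusion heuristic: an altitude drop (barometer) plus a
// high-magnitude impact (accelerometer) followed by stillness.
// All thresholds are made-up placeholders, not Google's values.

data class SensorWindow(
    val altitudeDropMeters: Double,   // change in altitude from the barometer
    val peakAccelG: Double,           // peak acceleration magnitude in g
    val postImpactMovement: Double    // movement energy after the impact
)

fun looksLikeHardFall(w: SensorWindow): Boolean {
    val droppedToGround = w.altitudeDropMeters > 0.5  // e.g. waist height to floor
    val hardImpact = w.peakAccelG > 3.0               // sudden, hard deceleration
    val thenStill = w.postImpactMovement < 0.1        // user stops moving afterward
    return droppedToGround && hardImpact && thenStill
}
```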

We’ve also collected large datasets of people doing the kinds of things we’ve talked about, such as free-living activities throughout the day without suffering any falls. We feed that into our machine learning model through these huge pipelines we built to capture and analyze all this data. Along with the other datasets of actual hard, high-impact falls, we can use these to distinguish between those types of events.
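Conceptually, that training setup pairs windows of sensor data with fall/no-fall labels. The toy sketch below shows the data shape and a placeholder classifier; the features and the nearest-centroid rule stand in for Google’s actual pipeline and model, which are not public.

```kotlin
// Toy sketch of the training-data shape: labeled sensor windows drawn from
// both staged falls and free-living activity. The features and the
// nearest-centroid rule below are placeholders for a real learned model.

data class LabeledWindow(val features: DoubleArray, val isFall: Boolean)

// Hypothetical features: peak impact, post-event movement, pressure change.
fun extractFeatures(accel: List<Double>, pressure: List<Double>): DoubleArray =
    doubleArrayOf(
        accel.maxOrNull() ?: 0.0,            // peak impact magnitude
        accel.takeLast(50).average(),        // movement level after the event
        pressure.last() - pressure.first()   // pressure rise ~ altitude drop
    )

// Stand-in classifier: compare a window to the mean fall / non-fall vectors.
fun classify(window: DoubleArray, falls: List<DoubleArray>, nonFalls: List<DoubleArray>): Boolean {
    fun centroid(xs: List<DoubleArray>) =
        DoubleArray(window.size) { i -> xs.map { it[i] }.average() }
    fun dist(a: DoubleArray, b: DoubleArray) =
        a.indices.sumOf { (a[it] - b[it]) * (a[it] - b[it]) }
    return dist(window, centroid(falls)) < dist(window, centroid(nonFalls))
}
```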

MHN: Does the Pixel continuously collect data for Google to see how it performs in the real world and to improve it?

Unadkat: We have an opt-in option where users know that, if they opt in, we’ll receive data from their devices when they get a fall alert. We can take that data, incorporate it into our model, and improve the model over time. But as a user, you have to go in manually and tap “I want you to do this.”
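In other words, fall-event data leaves the watch only when the user has explicitly enabled sharing. A hypothetical gate (all names are illustrative) could look like:

```kotlin
// Hypothetical opt-in gate: fall-event sensor data is uploaded for model
// improvement only if the user has explicitly turned sharing on.
class FallEventReporter(
    private val userOptedIn: () -> Boolean,
    private val upload: (ByteArray) -> Unit
) {
    fun onFallAlertTriggered(sensorSnapshot: ByteArray) {
        if (userOptedIn()) {
            upload(sensorSnapshot)  // feeds future model iterations
        }
        // Otherwise the data never leaves the watch.
    }
}
```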

MHN: But as people opt in, it keeps getting better.

Unadkat: Yes, exactly. That’s the ideal. We’re constantly trying to improve all these models. Even internally, we continue to collect data, iterate, and validate, increasing the number of use cases we can detect, increasing our overall coverage, and reducing false positive rates.
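The two quantities being traded off here are commonly measured as recall (coverage: the share of real falls detected) and the false positive rate (the share of non-fall events that trigger an alert). For reference, under the standard definitions:

```kotlin
// Standard validation metrics for a detector like this one.
fun recall(truePositives: Int, falseNegatives: Int): Double =
    truePositives.toDouble() / (truePositives + falseNegatives)

fun falsePositiveRate(falsePositives: Int, trueNegatives: Int): Double =
    falsePositives.toDouble() / (falsePositives + trueNegatives)
```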

MHN: And Edward, what role did you play in developing the fall detection features?

Edward Shi: Building on all the hard work Paras and his team had already put in, our Android and Pixel personal safety team is essentially focused on making sure users’ physical well-being is protected. So there was great synergy. One of the features we had introduced before was car crash detection.

The two are very similar in many ways. In particular, when an emergency event is detected, the user may not be able to get help themselves, for example if they’re unconscious. So how do we escalate? And then, of course, there’s making sure false alarms are minimized. On top of all the work Paras’ team had already done to minimize false positives, how do we design the experience to keep that false positive rate down?

That’s where alerting the user comes in. We have a countdown, we have haptics and also an alarm sound: all the UX, the user experience, that we designed there. And then, when we actually call emergency services, especially if the user is unconscious, how do we convey the information an emergency call taker needs to understand what’s going on and send the right help for that user? That’s the work our team has done.

We also worked with emergency services to test what our validation flow should be: Hey, do we provide you with the information you need for triage? Do you understand the information? And would it be helpful in an actual fall event where we made the call on the user’s behalf?

MHN: What kind of information could you collect from the watch to relay to the emergency services?

Shi: Essentially, where we come in is after the whole algorithm has done its job and says, “Okay, we’ve detected a bad fall.” Then, in our user experience, we don’t make the call until we’ve given the user an opportunity to cancel it and say, “Hey, I’m fine.” So if we do make the call, we assume the user was unconscious after the fall, or in any case didn’t respond.

So when we make the call, we actually provide context: “Hey, the Pixel Watch has detected a possible serious fall. The user hasn’t responded.” We can share that context, and then specifically the user’s location. We keep it pretty concise, because we know clear and concise information is optimal for them. But if they have the context that a fall happened, that the user may be unconscious, and the location they’re at, hopefully they can send help to the user quickly.
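A hypothetical shape for that relayed context, with field names invented for illustration (the actual call conveys this as a spoken message), might be:

```kotlin
// Hypothetical shape of the context relayed on an automated emergency call.
// Field names are invented; the real call conveys this as a spoken message.
data class EmergencyCallContext(
    val event: String,            // e.g. "Pixel Watch detected a possible serious fall"
    val userResponded: Boolean,   // false: the user did not cancel the countdown
    val latitude: Double,         // the user's location
    val longitude: Double
)
```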

MHN: How long did the development take?

Unadkat: I’ve been working on this for four years, so yes, it’s been a while. And, you know, we had initiatives within Google long before that to understand the space, collect data and so on, but this effort started small and grew in scale over time.

In the second part of our series, we look at the challenges the teams faced during the development process and what future iterations of the Pixel Watch might look like.
