April 23, 2024

00:33:44

The Future of Baby Monitoring with Sarah Ostadabbas

The Inventivity Pod

Show Notes

This week on the show, host Richard Miles explores the possibilities of parenting technology with Sarah Ostadabbas, an associate professor of electrical and computer engineering at Northeastern University. Sarah isn’t just talking theory – she’s built a revolutionary baby monitor called AI Wover. Tune in to hear how AI Wover goes beyond simple monitoring, the key distinctions between AI and augmented cognition, and why Sarah believes mentors are crucial for empowering female engineers. Tech enthusiasts and parents won’t want to miss this groundbreaking discussion on AI and child development.

“The thing that is very important for me as the lead founder of AI Wover is bringing equity to the hands of everybody, every household with an infant under their daily watch. This baby monitoring system, without extra cost, can be equipped with advanced AI and watch whether babies are reaching every milestone on time. And if not, that concern can be brought up to the pediatrician, and then hopefully the chain of action can happen on time.” – Sarah Ostadabbas

Our Smart Home series dives deep into the brains behind the innovations, exploring how biometric data, wireless power technology, and AI are shaping the homes of tomorrow. Tune in and get ready to reimagine how you live!


Episode Transcript

[00:00:01] Speaker A: Inventivity. What does it mean? The state of being inventive, creating or designing new things or thoughts. Hello, I'm Richard Miles. Welcome to the Inventivity Pod. Join us as we speak to inventors, entrepreneurs, and visionaries who are using inventivity to change the world. They will bring us alongside their journey as they share their personal stories from start to finish, including the triumphs, the failures, and everything in between. [00:00:31] Speaker B: Hi, I'm Richard Miles, and welcome to our Smart Home series. For this limited series, we talked to a subject matter expert and a couple of inventors to give our audience a look into the growing smart home industry. Today's guest is Dr. Sarah Ostadabbas, an associate professor in the electrical and computer engineering department at Northeastern University, or NU, in Boston. She joined NU in 2016 after completing her postdoctoral research at Georgia Tech, following her PhD at the University of Texas at Dallas in 2014. In 2008 she received a master's in control engineering from Sharif University in Tehran, as well as two bachelor's degrees in electrical engineering from the Amirkabir University of Technology in Tehran, where she was born and raised. At NU, Professor Ostadabbas is the director of the Augmented Cognition Laboratory and also the co-director of the Center for Signal Processing, Imaging, Reasoning, and Learning. Her research focuses on the convergence of computer vision and machine learning, particularly emphasizing representation learning and visual perception problems. Dr. Ostadabbas has co-authored over 120 peer-reviewed journal and conference articles. She's received research awards from the National Science Foundation, the Department of Defense, Sony, MathWorks, Amazon, Verizon, Oracle, Biogen, and Nvidia. And we are most proud of the fact that she was a finalist in the 2023 Cade Prize for Innovation for an invention called AI Wover, a first-of-its-kind AI-guided, cloud-based baby monitoring system, which we will be discussing on the show today. So welcome to the show, Dr. Ostadabbas. [00:02:16] Speaker C: Thank you for having me. [00:02:17] Speaker B: You have a very impressive resume. I just want to make sure it's okay if I call you Sarah, right? [00:02:21] Speaker C: Perfectly fine. Yeah, it's actually very hard to pronounce my last name, so Sarah is perfectly fine. [00:02:26] Speaker B: That's what I was worried about. Well, thank you. You have a very interesting background, and we're going to get into that. But before that, let's start by describing for listeners what AI Wover is, which was your entry for the Cade Prize. What problem is it trying to solve? What does it do, and how does it work? [00:02:43] Speaker C: AI Wover stands for AI watching over your baby. I am trying to redefine the baby monitoring industry by bringing advanced AI technology to this specific market: not just monitoring the baby when they are out of sight, but also providing safety and a comprehensive analysis of every movement in the crib, or whenever they are in the camera's field of view. We not only analyze the baby's movements, poses, posture, and the symmetry of the movements they are making, but we also want to make sure that the babies are safe.
Unfortunately, about 10,000 kids are rushed to the ER every year, and sadly around 1% of them die because of crib injuries. The baby monitoring systems that are out there, which a lot of parents rely on, are only motion-triggered. They don't go beyond that, and they miss these important accidents that could be prevented. We also want to go one step further by offering not only a simple alert, but also concise summaries for both parents and physicians. The thing on that front that is very important for me, as the lead founder of AI Wover, is bringing equity to the hands of everybody, every household with an infant under their daily watch. This baby monitoring system, without extra cost, can be equipped with advanced AI and watch whether the babies are reaching every milestone on time. And if not, that concern can be brought up to the pediatrician, and then hopefully the chain of action can happen on time. So this is the whole idea behind AI Wover. I want to bring my background in computer vision, machine learning, and artificial intelligence to the field, to be able to analyze this video data in an intelligent and automatic way. [00:04:40] Speaker B: Just so I'm understanding clearly how this works, Sarah: I have three kids, all grown now, so we didn't have a baby monitor at all when our kids were little. But I do have grandkids now, and my daughter has a baby monitor, obviously, which provides the video and the sound and everything. But in this case, it goes a step further, as you said. It's not only watching their actual movements, it's analyzing them. Right. Give us an example of a problem or a condition or an issue that AI Wover would pick up and say, hey, there's something going on here, versus a traditional camera-and-audio monitor. [00:05:18] Speaker C: Richard, you actually brought up a good point. Yes, you may not have had a baby monitor in hand when you had your kids, but we say that it takes a village to raise every kid. These days, many families don't have a lot of family members or help around, and both parents are working, so we need help, and this is the AI help I want to bring to the picture. So the first point is that more and more families are using baby monitoring systems to help them watch the baby, or get a chance to see the baby, while the baby is not exactly in their sight or field of view. But you are right: with AI Wover I want to do not only the basic baby monitoring, but in a better, smarter way. The question is also: is it possible to pick up early signs of neurodevelopmental disorders, such as congenital torticollis, autism spectrum disorder, or cerebral palsy, early on, when the diagnosis is not very obvious and you need long-term tracking, which is otherwise very impractical because nobody, especially a medical professional, is watching the baby for long hours, and pick up those very subtle signs using AI algorithms? In fact, I received my National Science Foundation CAREER award last year to find early signs of autism spectrum disorder from crib movement.
This is a collaborative work, because, the disclaimer here, I have no background in medical science or physical therapy in that sense, but I collaborate with a lot of psychologists, clinical experts, and pediatric physical therapists, and they tell us that there are early signs, motor function signs, in movement that we can pick up. We are training this AI model to pick up these early signs as early as possible. One other thing: we know that there are specific neurodevelopmental disorders out there that can be detected down the line, and the average age of autism detection is around four to four and a half years old. But a baby's brain has a lot of neuroplasticity, so the earlier intervention starts, the better the outcome and the higher the quality of life we see for these kids. So we are hoping to make strides in detecting early and starting intervention earlier. [00:07:46] Speaker B: I find this absolutely fascinating. So, first of all, I guess one follow-up question: do you already have a database, or are you building one, of a baby that is not suffering from autism or some sort of neurological issue? And against that you compare the movement of a baby that is? Is that how the algorithm flags that this could be a problem, by the way they've got their, I don't know, hands or feet or head positioned in the crib? Is that how it works? Or have you had medical practitioners define for you: okay, this is what we see in a baby that possibly might be on the spectrum as early as a few months old? [00:08:28] Speaker C: You are asking very good questions here. The first question was about the data, and you're right. These AI algorithms, most or all of the ones you hear about these days, are based on deep learning models, and these are very data-hungry models. You have to teach them by data, with examples of what they have to detect. And collecting the data is the most difficult part, because with data coming from infants we have privacy, security, and all the issues around that. It is also very hard because these are incidents that don't happen for every single baby. Yes, we have a mechanism, as part of my research grant, to collect data and to monitor these. But AI Wover makes this data collection, and even studying the hypotheses you may have about the relationship between a specific motor movement and a specific diagnosis, possible. A lot of studies in the infant domain, on neurodevelopmental conditions in infancy, suffer from low statistical power, from the number of samples they have, because they have to ask the infants to come into the lab to collect the data. That's very hard, and the infants are outside their natural environment. So AI Wover also provides a platform to study medical hypotheses, because it can be given to many, many families to test the hypothesis a physician has for a specific condition. The one I am working on with our clinical collaborators is ASD, autism spectrum disorder. There have been studies here and there showing that very early on there are specific types of motor development that we expect to see in every infant. Every movement, every new milestone, starts very asymmetric, but neurotypical kids start moving in a symmetric way.
Actually, one of our collaborators always says that symmetry is the hallmark of development. But unfortunately, some of the kids who are suffering from a neurodevelopmental disorder don't reach that symmetry. They stay very asymmetric throughout their development, or they reach symmetric movement, for example sitting symmetrically or walking symmetrically, much later in life. These are things we can teach the algorithm, even with a smaller amount of data. In fact, one of the theoretical contributions of my lab is computer vision with small data. While everybody else is talking about big data, my lab actually focuses on the small data domain: the domain where data collection or labeling is very expensive or somehow impossible, especially when the subject is not very cooperative and is also protected by very strong privacy and security laws. The infant domain is an exact example of this. In the small data domain we suffer from data limitations, while advanced AI needs a lot of data. So my lab makes sure to bring expert knowledge into the problem, to somehow close the gap of needing millions and millions of samples by teaching specific physics-based or medically relevant information to the machine without showing it exact examples. [00:11:37] Speaker B: So you're helping the machine learn by essentially giving it the answer, almost, to say this is what abnormal development or movement looks like. It doesn't have to figure that out on its own. [00:11:47] Speaker C: Yeah, you can put it that way. [00:11:48] Speaker B: You actually already led into, or at least touched on, my next question. With the concerns about AI and data going to the cloud, have you encountered a lot of resistance from potential customers, that is, parents, saying, whoa, I don't want to have a device trained on my baby when who knows where that information is going? How do you address those concerns? [00:12:13] Speaker C: That's actually a very valid point. Anytime you have a video camera at home, the concern of privacy is always there for the users, the parents who want to use it. One thing that benefits the venture we are working on is that many parents already use baby monitoring at home, and many of these systems are already cloud-based: they send the data to the cloud, but the parents don't get anything back. They just send the data and receive nothing in return. What we want to do is say: this data is sent to the cloud, and we are actually analyzing it and sending it back to you. But the problem of privacy is a bigger problem, especially when we consider the specific cultural background of the family. As part of my National Science Foundation CAREER award, I'm also focusing on parents in Puerto Rico. Interestingly, based on specific cultural practices, a lot of families there are co-sleeping with their babies, so they don't like to have the baby recorded for long. However, when you talk to the pediatricians, they say that everybody is actually encouraged to put the baby in their own crib rather than co-sleeping. So a lot of things come up when we are talking about babies.
Babies are, if not the most important thing in the life of parents, one of the most important things. So this discussion is out there, and I think it's a good discussion to have, because it makes our model better, a system that is more equitable, that everybody can use. People in Boston and people in Puerto Rico are both willing to use it because of the advantages they get from it. I also want to educate parents on the benefits they can get: early detection, avoiding some of the risks and accidents that may happen, and a summary of the kid's journey while under the watch of these AI monitors. [00:14:00] Speaker B: Sarah, tell us a little bit more about augmented cognition, apart from the AI Wover work. It seems like this is a field that probably has no limit. I was checking out my iPhone recently, and I noticed that every time I check the standard health app on the iPhone, it's giving me more and more metrics that it's measuring, including the gait of my walk and the length of my steps and so on. Along the same lines, if I had suffered a mild stroke, for instance, it would probably detect that my walking had changed and my weight shifting had changed, and would alert a medical provider. So I'm guessing that this field is probably exploding. Tell me a little bit more about some of the other projects or research that you're working on. [00:14:48] Speaker C: Yeah, I mean, AI is here to stay; we keep hearing that. My work is at the intersection of computer vision and machine learning, so I usually process visual data, videos, images, anything related to visual perception problems: detecting, tracking, pose estimation, activity recognition. I mainly focus on detecting and tracking humans or animals in their natural habitat or at home. As for the Augmented Cognition Lab: I came up with the name of the lab when I was a second-year PhD student, because I was fascinated by AI. I wanted to do computer vision at that time, even before the era of deep learning, but I just didn't like the specific connotation that AI carries, that you want to exclude the human from the picture. Augmented cognition is actually about enhancing human information-processing capability, processing the information in a better way to make the human mind augmented. Imagine a ladder for the brain, to make the brain augmented. So rather than AI, I was using augmented cognition as the name of the lab. And then I was fortunate enough to land an assistant professorship at Northeastern University, and recently I got promoted to associate professor. But the idea is that we are processing video data. I have both healthcare and military applications, funded by the United States Army. The common denominator in these projects is that I'm making computer vision and machine learning models that need to be data-efficient, for when we don't have a lot of data. Remember I mentioned the privacy issue in the infant domain? When it comes to the specific Army application, the defense application, we have the security issue. In both of them, you cannot collect a lot of data, or even if you collect data,
you cannot just ask your PhD student to go ahead and label and annotate them and find specific things, because you need experts who are actually certified, for both the Army applications and the healthcare applications. So it's very expensive to collect data, and it's even more expensive to annotate and label those data and then teach the machine what to do. Because of that, I bring a lot of physics-based and knowledge-based information into the algorithms so they can work with fewer examples but still receive the knowledge. [00:17:08] Speaker B: Since this series is about the smart home industry, and it's confirmed, I think, by everything you've said: this seems like it would have applications across the age ranges, right? Not just for infants in a crib, but for toddlers who are developing, and on up to people facing end-of-life issues such as dementia. Are all those applications already around, or are they being developed? So that you're essentially doing the same thing: you're looking at someone who may have some cognitive or neural issues, they haven't been diagnosed yet, but there are certain clues their body is giving off that could be picked up by the equivalent of AI Wover, except for adults and people approaching end of life. Is that accurate? [00:17:50] Speaker C: This is absolutely correct. I know a lot of collaborators and colleagues who are working in that domain, studying how specific neurodegenerative conditions affect the physical behavior of the patient or the person. The thing that made me specifically work on infants: I looked at all of the advancements happening in the AI and visual processing domain, and I had footage of my son in his crib, and I applied them, and they failed miserably. None of those models had seen babies. They couldn't even recognize babies from the background, so they didn't work. That was the reason I said, okay, now I need to teach the model to see the baby and then understand what the baby is doing. That was the whole process I started four years ago: teaching the model to be adapted to babies, to infants. But as you mentioned, toddlerhood is another specific topic, and we have started following those babies that we began monitoring early on in the crib into toddlerhood, where the movement is different. And there are a lot of very capable labs and colleagues working on the elderly side, on what can be detected by looking at visual information over the long term, which these AI systems allow us to do in an unobtrusive way. All of us have a smartphone or a smartwatch that can collect some wearable data from us, but with a camera you're not wearing anything. It can watch you and give richly informative information about your life. It's one of the avenues people are pursuing, but there is still a lot of work that needs to be done. [00:19:33] Speaker B: Let's talk a little bit about the commercialization piece of it. A lot of folks that we've had on the show are very similar to you: they're researchers, they've come up with a very good idea, it has commercial potential, and then they think, okay, let's form a little company, let's try to get this actually going.
And in many cases, it's a very difficult, daunting process, because of getting it out into the market and tested and so on. How is that going for AI Wover? What are the next steps? And how involved are you in developing the company? Or have you found a CEO or someone like that to do a lot of the business development? [00:20:06] Speaker C: You're right. I mean, my first hat has been professor at the university. Four years ago, my lab was growing, things were on a smooth path, and I was publishing on small-data computer vision and small-data machine learning. I had different application domains, but nothing about babies, until I decided to become a mom myself and had a healthy, happy baby boy in the middle of a very busy schedule for both myself and my husband. It wasn't something we couldn't handle, but unfortunately a pandemic happened, and without any help or family around, both of us became sleep-deprived, and we relied on technologies. One of them was an advanced baby monitor that was gifted to us. But one night, while our fancy baby monitor was showing some minimal movement in my son's crib, I heard him giggling at the top of the stairs. And let me tell you, these are very steep stairs, the kind that are common in Northeast houses. Reviewing the baby monitor footage later on, we saw that he had climbed out of his crib and crawled out of his room to the hallway. That was a wake-up call. Of course we lowered his crib mattress and installed a gate, but it made me think: with my expertise and my lab's expertise in computer vision, there must be a better way to monitor babies and ensure their safety when they are out of our sight. That was when I began adapting advanced computer vision algorithms, which at that time were usually used for sports, marketing, and security, to infants. And as I mentioned, they didn't work, so I had to start building and creating the first infant-specific computer vision algorithms. But I always wanted to bring these algorithms beyond research alone. The transition from the lab to the hands of every parent was the inspiration behind AI Wover, the company that spun out of Northeastern University. I am at the moment on sabbatical from the school, so I am acting as the CEO of the company. So we have a CEO, we have a president, and at this point we are very early on, so we are trying to go after funding to turn this inspiration, this very heavy algorithmic research foundation, into reality, and get the first version out, so parents can get better sleep without worrying that their child, suddenly in the middle of the night, goes from one milestone to another, and something that should be a very happy moment becomes a very, very sad incident. [00:22:46] Speaker B: I now have a granddaughter who's four and twin grandsons who are 15 months, and the boys are walking all over the place now. And I've got to tell you, if you've ever seen videos of stores getting looted, that's what it's like having the toddlers at our house. They'll just run into a room, the two of them together, and start grabbing everything, and there's almost no way you can keep an eye on them, because they're very curious and they want to go wherever they can. [00:23:14] Speaker C: Actually, that's one thing, Richard, that you mentioned.
So one of the other applications of AI Wover, outside of the crib, is actually for grandparents or caretakers of the kids, especially in common areas and play areas: to alert if the kids get close to the oven, or close to a specific area where you don't want them to go. Because we have all of the detection algorithms, we can detect very accurately where the oven is and where the baby is, and if the distance between them is less than some threshold, it's going to alert. That allows the parents to leave the kids with the grandparents with much more peace of mind, and the grandparents don't need to constantly run after them. [00:23:52] Speaker B: Yeah, I was going to say there's a market there for grandparents. I mean, we can keep up with them, but it's constant. [00:24:01] Speaker C: There are two of them, actually. [00:24:02] Speaker B: There are two of them, and the difference is, with our granddaughter, my wife and I could trade off and one of us could read a book. But now it's both of us, because particularly at that age, they will go to the kitchen, and anything that has a knob they'll try to turn or pull, including the dials on the oven. Everywhere, they just want to touch it and feel it and see what it does. [00:24:23] Speaker C: Tell me about it. [00:24:24] Speaker B: Yeah, you really have to follow them around full time to make sure that they don't do something really bad. So, Sarah, I think you're in a very exciting field, and I really hope the company does well. Let me ask you a little bit about what we mentioned at the beginning of the show. You grew up in Iran, and you did your undergraduate studies and at least one master's degree there. What was that experience like? Describe for us a little bit what it was like growing up in Iran as a young woman and earning the degrees that you did. [00:24:52] Speaker C: I was born and raised in Tehran, the capital city of Iran, and I came to the United States to begin my PhD and also to establish my new home away from home. Growing up, I was always fascinated by technology. Cell phones and, I believe, the Internet both came to Iran while I was still in high school, and I knew that I wanted to choose a major related to computers. However, coming from a family with a very traditional background, I was strongly encouraged to pursue medicine. Actually, they wanted me to become a surgeon. They said, you have the grades, you have to become a surgeon. As one of the very few members of my family attending college, when I got accepted into one of the top-tier schools in Iran, I promised to study biomedical engineering while studying electrical and computer engineering as a double major. This was an acceptable compromise, and I became an engineer, and while at school I truly enjoyed every aspect of ECE, electrical and computer engineering. But I was in awe of the visual data analysis capability of computers and the advancement it could give us. At that time there was no computer vision per se; it was mostly image analysis. How a computer could detect one piece in a very small image and start enhancing it was fascinating to me. So when I came to the US, the term artificial intelligence was around, but not at its peak as we are experiencing now, and research into AI and computer vision was my passion.
But as I mentioned, I wanted to see the algorithms I develop, the code I write, used by people. So when I started my faculty position at Northeastern, the first thing I did was make all of my code publicly available, all of the code that my students and I were writing. You can see that now: if you search Augmented Cognition Lab, our code is all publicly available. But still, it was targeted mainly at researchers, and I wanted to make something that other people use every day. I wanted to turn, you could call it, a computer vision algorithm into a consumer technology, and this is the idea behind AI Wover. One other thing about my lab I wanted to say: even though I chose an academic job, I am running my lab like a Silicon Valley startup. We have stand-ups, my team members are co-founders of the lab, and each student even has a pitch for the project they are working on. Being a professor at a very research-active university like Northeastern allows you to explore both worlds, so changing hats is, at least as I see it, manageable. [00:27:35] Speaker B: So I imagine you must still have lots of family in Iran. Are your parents and siblings still there? [00:27:40] Speaker C: Yeah, they are back there, and every once in a while I send them news about the things that are happening here. To be honest, I use my son's videos as a platform to test a lot of these algorithms. Remember, it was in the middle of the pandemic when I started, and I didn't have funding for that specific project, so Alex, my son, was the main subject of many of these analyses. I mean, they were videos of him, so I wasn't touching him; it was video that I was applying detection to. So my family is very excited when they see a paper, an interview, an article coming out and find Alex's picture there. That's the main part they are excited about. [00:28:24] Speaker B: It sounds like you're very happy there in Boston. Would you consider returning to Iran, or is that not an option? [00:28:29] Speaker C: My home is here. I'm a US citizen, and I have spent most of my adulthood in the United States. I would like to take Alexander back to visit Iran. I haven't had the chance, and a lot of that comes, unfortunately, from the political climate there. But it's a beautiful country, with beautiful nature and a beautiful history behind it, so it's a place I would like to go with my family and experience. But as far as living goes, as I mentioned, I have spent most of my adulthood here in the US, and this is my home. [00:29:08] Speaker B: One more question, Sarah: who was your inspiration? You talked a little bit about how your parents wanted you to be a surgeon or a doctor, and then you decided to be an engineer. What was your family background? Were there other relatives, aunts, uncles, grandparents, who were in the engineering field or close to it? Why did you decide to do that? [00:29:24] Speaker C: No one. I mean, it was technology. I loved math and physics and I just wanted that, but engineering was considered too nerdy for girls, so there was no one there.
Yeah, I wanted to go into a major where, rather than just choosing between one class and another, a history class versus a math class, I would always be learning about computers, about math, about physics. I was reading about that, and people said engineering was the place for it. So it took some convincing. Doing a double major wasn't easy; I was taking 24 credits per semester and sometimes taking two exams at the same time. But looking back, it worked, and I'm very happy I also chose biomedical engineering as a double major, because it gave me the perspective of understanding the human body in an engineering sense. A lot of my work in healthcare has been positively affected by that knowledge, so I wouldn't take it back. These days, I'm pretty sure things have changed. Thanks to the Internet and technology, people understand engineering and how positively it can impact the world, so there is less resistance. As a matter of fact, my sister, ten years younger than me, recently graduated as a computer engineer from the Georgia Institute of Technology, Georgia Tech, in Atlanta. She came ten years after me and graduated there, and her path was much better received than my journey was. But I'm happy, because things are changing for the better, and people are accepting that girls, that women, can become engineers. [00:30:57] Speaker B: Do you remember having a teacher or an instructor who said, hey, you should consider this? Or was it just that wonder about technology itself, as you said, that you fell in love with? [00:31:06] Speaker C: That's a very good question. I had a teacher who said that you shouldn't consider it. I went to a specific school for, for lack of a better word, talented students: an all-girls high school that was very much technology-oriented. They were encouraging us to ask questions, scientific questions. So that was a very strong and fulfilling background. However, it clashed a little bit with my family's traditions in terms of choosing a path. Even at the university, I had to navigate, always checking with myself: okay, you can do this; it's not something you cannot do. I wish there had been more encouraging mentors out there. That was the reason, when I was offered the position of director of Women in Engineering at Northeastern last December, that I gladly accepted. That position, I think, is important because it shows that it is possible, that engineering is not for one specific gender. Making female-identifying students excited about and welcome in this major is important, because we need a lot of people here. There are a lot of world problems that engineers can solve, and we have to welcome everybody, from every gender, every background and orientation, every culture. They have to come and become engineers. So I'm using this platform to invite people. [00:32:34] Speaker B: You are a very inspirational figure, Sarah, so I really congratulate you on what you've done in your lab and with AI Wover. I remember seeing the application in our prize competition last year, and the judges were really taken with it and impressed, and I thought it was pretty amazing as well. So I really wish you the best in the development of the product, and in all of your research, and encourage your students and so on.
And yeah, we hope to have you back on the show at some point. But thanks very much for taking the time to talk to me today. [00:33:04] Speaker C: Thanks for listening. Thank you. [00:33:06] Speaker D: The Inventivity Pod is produced by the Cade Museum for Creativity and Invention, located in Gainesville, Florida. Richard Miles and I, James Di Virgilio, are your podcast hosts. Podcasts are recorded at the Heartwood Soundstage in Gainesville and edited and mixed by Rob Rothschild. Be sure to subscribe to the Inventivity Pod wherever you get your podcasts, and leave a comment or review to let us know how we're doing. Until next time, be inventive.