Speech-Language Pathology Research Lab Virtual Tours
Six faculty from the Department of Special Education and Communication Disorders guide viewers through virtual tours of their respective labs to discuss their research. 0:03 – Steven Barlow's Communication Neuroscience Laboratories; 7:18 – Angela Dietsch's Sensorimotor Integration for Swallowing and Communication Lab; 10:45 – Susan Loveall's Lab; 11:42 – Kevin Pitt's Augmentative and Alternative Communication Translation Lab; 15:05 – Naomi Rodgers' Stuttering Lab; 17:23 – Yingying Wang's Neuroimaging for Language, Literacy and Learning Lab
[00:00:03.070]My name is Dr. Steven Barlow.
[00:00:04.970]I'm a professor in Special Education and Communication Disorders
[00:00:09.970]and also the associate director
[00:00:12.350]of the Center for Brain, Biology and Behavior.
[00:00:16.300]We're in our first laboratory suite.
[00:00:18.960]This is Barkley room 140.
[00:00:22.860]This is where we perform biomechanical
[00:00:26.700]and neurosensory testing of orofacial
[00:00:30.320]as well as hand apparatus.
[00:00:33.710]These studies are done in developmental populations,
[00:00:37.900]including very young children,
[00:00:40.420]as well as individuals with progressive neuromotor disease.
[00:00:45.970]For example, cerebrovascular stroke.
[00:00:49.170]Why don't we just walk into our test booth here
[00:00:51.630]so it can give you a sense of what the environment looks like.
[00:00:59.810]Here we can see the recording suite
[00:01:03.220]for testing what are known as vibrotactile stimuli.
[00:01:08.540]Here we have a servo-controlled linear motor
[00:01:11.900]that allows us to very precisely test
[00:01:14.540]the somatosensory system.
[00:01:17.010]As you probably know,
[00:01:18.800]movement is highly dependent on sensory cues
[00:01:23.380]for accuracy as well as maintaining production.
[00:01:27.040]So with this device, we can test nerve endings
[00:01:31.150]and mechanical receptors in orofacial tissues
[00:01:34.880]as well as in the digits
[00:01:37.780]as it relates to manipulation.
[00:01:40.640]We also have instrumentation available in this laboratory
[00:01:46.450]to record both single
[00:01:48.180]as well as multi-unit electromyographic signals.
[00:01:53.850]This is particularly useful for biofeedback
[00:01:56.940]or retraining as part of a comprehensive
[00:02:00.640]rehabilitation therapy, again across the lifespan.
[00:02:06.600]We can kind of back out of this laboratory space.
[00:02:11.090]And we'll go next door to our second lab
[00:02:16.960]here in the Barkley Center, and this is room 141.
[00:02:24.480]And we're down on the main floor by the way.
[00:02:26.600]So we're, we're very much isolated
[00:02:29.840]from a lot of the traffic in the clinic.
[00:02:33.600]During a normal day, we'll have several graduate students
[00:02:39.610]working in this laboratory space,
[00:02:41.890]including speech-language pathology majors, and students
[00:02:47.730]from computer science and biomedical engineering.
[00:02:51.660]Let's go a little bit deeper into this lab.
[00:03:04.620]So this space is really a collection of recording suites
[00:03:09.940]and also workstation areas.
[00:03:14.630]Here we have a computer controlled milling machine
[00:03:19.090]that allows us to design and manufacture
[00:03:22.730]specialized sensors for our experiments,
[00:03:26.950]particularly for the somatosensory stimulation studies
[00:03:30.750]as well as some of the biomechanical experiments that we do.
[00:03:36.330]We'll back out of here and just go around the corner.
[00:03:41.090]in this lab.
[00:03:42.570]Here we have a speech aerodynamics test suite.
[00:03:48.250]This allows us to perform non-invasive measures
[00:03:53.490]of laryngeal airway resistance
[00:03:55.670]as well as test the efficiency of the velopharyngeal port.
[00:04:01.050]This can be run in subjects as young as four years of age.
[00:04:06.140]And again, we can apply this technology across the lifespan.
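The non-invasive measure described here is commonly computed as the ratio of translaryngeal pressure to translaryngeal airflow, with peak intraoral pressure during repeated /p/ syllables standing in for subglottal pressure. The tour doesn't specify the lab's exact protocol, so this is a minimal sketch of that standard ratio; the function name and example values are invented for illustration.

```python
def laryngeal_airway_resistance(peak_oral_pressure_cmh2o, mean_flow_lps):
    """Estimate laryngeal airway resistance in cmH2O/(L/s).

    During repeated /pi/ syllables, peak intraoral pressure approximates
    subglottal pressure and mean airflow during the vowel approximates
    translaryngeal flow, so their ratio estimates resistance without
    invasive pressure sensing. (Illustrative sketch only.)
    """
    if mean_flow_lps <= 0:
        raise ValueError("airflow must be positive")
    return peak_oral_pressure_cmh2o / mean_flow_lps

# For example: 8 cmH2O peak pressure at 0.2 L/s mean flow
resistance = laryngeal_airway_resistance(8.0, 0.2)
```

The example numbers are illustrative, not normative values for any age group.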
[00:04:12.300]Over here, we have our ForceWin
[00:04:15.550]biomechanics recording station.
[00:04:19.570]This is a technology that was created here
[00:04:22.190]at the University of Nebraska,
[00:04:24.600]and has now been run in five different studies
[00:04:28.030]with about 250 subjects across the lifespan
[00:04:31.990]and is being applied in cerebrovascular stroke.
[00:04:36.310]Not only is it used for diagnostics,
[00:04:38.640]but we can configure this system for biofeedback.
[00:04:44.900]And I'll just kind of loop by the camera
[00:04:48.830]and we'll just go into our laboratory meeting area.
[00:04:56.520]So you can see this long table with a monitor
[00:05:00.500]and some of the equipment cabinets that surround this space.
[00:05:04.110]This is our popular gathering area
[00:05:08.120]to discuss study designs and progress
[00:05:11.880]on individual experiments.
[00:05:15.170]We meet to discuss, for example, undergraduate research
[00:05:19.810]theses, master's theses, as well as dissertations.
[00:05:23.920]We also hold regular meetings to discuss progress
[00:05:28.100]on research grants as well as publication status.
[00:05:32.440]So this is a very popular communal area
[00:05:36.100]that gets used very heavily in this lab.
[00:05:39.050]We also have another set of facilities
[00:05:42.550]on the City Campus at the Brain Center,
[00:05:45.630]which is known as the CB3.
[00:05:48.190]And that includes
[00:05:49.890]the Functional Near-Infrared Spectroscopy lab.
[00:05:54.230]That's a neuroimaging facility
[00:05:57.410]that allows us to do hemodynamic
[00:05:59.790]recordings again in a variety of ages.
[00:06:03.930]That lab has been pretty busy in the last six months
[00:06:06.860]where we've started some studies, Dr. Wang in particular,
[00:06:13.610]examining cochlear implant
[00:06:16.630]brain mapping in children and adults.
[00:06:19.670]And we've also done sensory motor studies
[00:06:22.450]using that technology.
[00:06:24.710]The Brain Center also is home
[00:06:27.810]to our three Tesla Siemens MRI scanner.
[00:06:32.670]And our laboratory, along with Dr. Dietsch and Dr. Wang,
[00:06:37.560]has conducted and is conducting many different experiments
[00:06:41.410]using MRI imaging technology.
[00:06:44.980]So together our laboratory spans both the City Campus
[00:06:50.700]and the East Campus,
[00:06:51.950]which is home to Special Education and Communication Disorders.
[00:06:55.940]And this provides a very rich collaborative experience,
[00:07:00.290]engaging students in interactive studies with students
[00:07:05.720]from a variety of departments, and with both neurotypical
[00:07:10.130]as well as neurologically impaired subjects.
[00:07:13.160]So thank you for visiting the Communication Neuroscience Laboratories.
[00:07:21.831]Hi, my name is Angela Dietsch
[00:07:23.580]and I am a faculty member here
[00:07:25.260]at the University of Nebraska-Lincoln.
[00:07:26.880]I am also the director of the Sensorimotor Integration
[00:07:30.260]for Swallowing and Communication Lab,
[00:07:32.290]which is where we are now.
[00:07:35.050]Prior to switching to the academic world,
[00:07:38.470]I was a speech pathology clinician for quite a few years
[00:07:42.460]in all levels of care.
[00:07:44.000]And I am very glad to be able to bring that clinical
[00:07:46.890]experience to teaching here at the University of Nebraska
[00:07:50.950]and also to the research that we're doing here in this lab.
[00:07:54.310]So here in the SISC lab,
[00:07:57.050]among many of the projects that we're working
[00:07:59.570]on, a common thread is that we are interested
[00:08:02.150]in looking at the kinds of tools and assessment
[00:08:06.640]processes that we are already using,
[00:08:08.440]as well as intervention strategies,
[00:08:10.520]to make sure that they are as efficacious
[00:08:12.960]as they can be, and that they are being selected
[00:08:15.660]for the right clients.
[00:08:16.980]And we're also interested in developing new strategies
[00:08:19.950]if that becomes possible.
[00:08:21.920]So we have a lot of the same kinds of tools
[00:08:25.680]that are used clinically that we use in our research
[00:08:28.640]and then that we can also use during training and teaching.
[00:08:32.380]So we have various things, like this setup
[00:08:35.660]over here, which is specifically designed to measure
[00:08:40.100]the kinds of pressures that are required on EMST
[00:08:43.130](expiratory muscle strength training) devices, so that we can select the right kinds
[00:08:45.970]of devices for the right kinds of patients.
[00:08:48.220]And this is a tool that has been very recently incorporated
[00:08:53.210]into both speech and swallowing therapy
[00:08:55.960]that we borrowed from our respiratory therapy friends.
[00:08:59.190]We also have instruments like this that measure
[00:09:02.300]tongue pressure against the roof of the mouth,
[00:09:04.010]which can be important
[00:09:05.230]for both speech and swallowing purposes.
[00:09:07.630]And we have a full array of software
[00:09:09.810]that can do voice analysis for clients
[00:09:13.230]who are having concerns about their voice quality
[00:09:16.740]or how they're using their voice
[00:09:18.240]as part of their communication.
[00:09:20.950]For individuals who have voice or swallowing problems,
[00:09:24.500]one of the most common clinical instrumentation tools
[00:09:28.430]that's available is endoscopy.
[00:09:30.650]We have a full endoscopy setup here,
[00:09:32.900]as well as a mannequin on which we can practice
[00:09:36.800]this skill, because learning on each other is not optimal.
[00:09:42.500]And then, because part of the research
[00:09:46.080]that we're doing related to swallowing disorders
[00:09:49.870]is about how sensory input influences
[00:09:53.660]the motor output, or the swallowing behavior,
[00:09:57.120]we also have a full taste lab where we can create
[00:10:01.020]our own taste stimuli using precise scientific methods.
[00:10:05.530]So all of the stuff from AP chemistry
[00:10:07.870]now comes back into real life.
[00:10:12.200]And we can then create the kinds of taste stimuli
[00:10:16.290]that we think would be most beneficial
[00:10:18.170]and test them in our patients
[00:10:20.850]and our clients, both here in the lab
[00:10:24.080]and also at CB3,
[00:10:25.940]which is where we do a lot of our research.
[00:10:29.340]CB3 is a neuroimaging center
[00:10:31.460]located on the City Campus.
[00:10:33.930]And that's also where my office is located.
[00:10:36.680]So we have lots of access to be able to incorporate
[00:10:40.540]that kind of research technology
[00:10:42.720]into all of the clinically based things that we're doing.
[00:10:48.930]Hi, I'm Susan Loveall.
[00:10:50.500]I'm a professor in the Department of Special Education
[00:10:53.820]and Communication Disorders.
[00:10:55.660]And we are in my lab, and specifically
[00:10:58.470]we're in the observation room
[00:10:59.770]where parents are able to observe testing
[00:11:02.580]of their son or daughter.
[00:11:05.470]My research focuses on learning, language,
[00:11:08.450]and literacy in individuals who have intellectual
[00:11:11.720]and developmental disabilities, such as Down syndrome.
[00:11:15.450]The goal of our research is to better understand
[00:11:18.520]these skills and cognitive development in general
[00:11:21.630]in these populations so that we can find better ways
[00:11:24.850]of teaching them.
[00:11:26.420]We are always looking for bright and hardworking students
[00:11:29.550]to help out in the lab,
[00:11:31.140]and students help with things like testing
[00:11:34.810]participants, scoring assessments, and data entry.
[00:11:39.130]And thank you for coming by to see the lab.
[00:11:45.300]Hi, my name is Kevin Pitt.
[00:11:46.330]And this is the Augmentative
[00:11:47.450]and Alternative Communication Translation Laboratory.
[00:11:50.380]For those of you who haven't heard about AAC before
[00:11:52.750]or augmentative and alternative communication
[00:11:55.140]it's the area of the field that seeks to provide support
[00:11:58.520]or replacement communication method for those individuals
[00:12:01.710]who find traditional methods of communication
[00:12:04.330]such as speech or writing, either ineffective
[00:12:08.140]or just inefficient.
[00:12:10.600]In our lab, we're primarily interested
[00:12:12.440]in providing AAC methods for those individuals
[00:12:15.070]with severe physical impairments.
[00:12:17.120]So we're thinking about potentially children
[00:12:19.290]or adults who have cerebral palsy,
[00:12:21.850]or potentially adults who have conditions
[00:12:23.710]like amyotrophic lateral sclerosis, or ALS.
[00:12:26.890]And some of you may have heard about
[00:12:28.360]the late Stephen Hawking and those kinds of conditions.
[00:12:31.350]And we're thinking about those people
[00:12:32.750]that have really severely impaired voluntary movements.
[00:12:37.400]In the lab, we're really interested
[00:12:38.760]in how we support communication for these individuals
[00:12:42.470]both today but also looking towards the future as well.
[00:12:45.980]So for instance, right now, individuals who have severe
[00:12:48.780]physical impairments may access AAC devices,
[00:12:51.890]such as this one right here,
[00:12:53.480]through eye gaze technology.
[00:12:55.500]And eye gaze technology
[00:12:56.560]works by a little infrared sensor down
[00:12:59.430]here at the bottom detecting the location of the pupils.
[00:13:03.540]So therefore, an individual can select an item
[00:13:06.540]in the communication display
[00:13:08.310]by orienting their eyes towards a given item
[00:13:11.880]and then staring at that item for a slightly
[00:13:14.090]extended period of time, or blinking, for instance.
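The dwell-based selection just described can be sketched in a few lines: gaze samples that stay inside a target's bounding box accumulate dwell time, and the selection fires once a threshold is reached. This is a simplified illustration, not the software running on the device shown; real eye trackers add calibration, filtering, and blink detection.

```python
def dwell_select(gaze_samples, targets, dwell_s=1.0, sample_rate_hz=60):
    """Return the name of the first target selected by dwell, or None.

    gaze_samples: iterable of (x, y) gaze points at sample_rate_hz.
    targets: dict mapping name -> (x0, y0, x1, y1) bounding box.
    A target is selected once gaze stays inside its box for dwell_s seconds.
    """
    needed = int(dwell_s * sample_rate_hz)  # consecutive samples required
    counts = {name: 0 for name in targets}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in targets.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                if counts[name] >= needed:
                    return name
            else:
                counts[name] = 0  # dwell resets when gaze leaves the box
    return None
```

With a 60 Hz tracker and a one-second dwell, sixty consecutive in-target samples trigger the selection; a blink-based trigger would simply be a second condition in the same loop.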
[00:13:17.780]Regarding future types of AAC access technologies,
[00:13:21.430]my lab is primarily concerned
[00:13:23.120]about the application of brain-computer interface
[00:13:25.610]technologies for AAC, or BCI.
[00:13:28.430]So brain-computer interfaces seek to directly translate
[00:13:32.330]an individual's brain activity
[00:13:34.960]and provide communication control.
[00:13:37.750]So for instance, now instead of having to, say, physically
[00:13:41.110]move your hand to guide a mouse cursor across the screen,
[00:13:44.710]you could now simply just imagine moving your hand
[00:13:47.780]and that would still have the same effect.
[00:13:50.150]So to control these devices in our lab,
[00:13:52.070]the individual wears an electroencephalography cap
[00:13:55.270]or an EEG cap such as this.
[00:13:58.000]Our cap has 32 electrodes in it
[00:13:59.710]and each one of these electrodes records brain
[00:14:02.040]activity at the level of the scalp.
[00:14:04.310]Then using a computer, we show them different things.
[00:14:08.130]We have them perform different tasks.
[00:14:10.120]And we look at the brain activity
[00:14:12.490]associated with those tasks.
[00:14:14.380]And then look at how that can be translated
[00:14:16.590]to provide communication access,
[00:14:18.850]even for those who have no form of motor movements.
[00:14:21.830]So individuals who have total locked-in syndrome
[00:14:24.360]or disorders of consciousness, those people
[00:14:26.860]in a totally locked-in state.
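One common way a BCI translates EEG into a control signal, for the imagined-hand-movement example above, is to detect the drop in mu-band (8-12 Hz) power that accompanies motor imagery. This is a generic, single-channel sketch under that assumption, not necessarily this lab's pipeline; the function names and the 0.5 threshold are invented for illustration.

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean periodogram power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def detect_motor_imagery(epoch, resting_mu_power, fs=250.0):
    """Flag an epoch as imagined movement if its mu-band power drops
    well below the resting baseline (event-related desynchronization).

    Illustrative single-channel detector only; practical BCIs combine
    many electrodes, spatial filters, and trained classifiers.
    """
    return band_power(epoch, fs) < 0.5 * resting_mu_power
```

In practice the baseline would be estimated from many resting epochs, and the threshold tuned per user during the training sessions described above.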
[00:14:29.040]So very broadly, my lab is really interested
[00:14:31.180]in how we develop and implement these types of AAC devices
[00:14:34.830]for a range of populations,
[00:14:36.420]both children and adult.
[00:14:38.370]It's also really important in this translation
[00:14:40.760]that we also consider how we train individuals
[00:14:43.050]to use these devices.
[00:14:44.700]So that can include both the user of the device
[00:14:48.150]but also we've got to think about caregivers,
[00:14:50.230]how we train speech-language pathologists
[00:14:52.460]and interprofessional, multidisciplinary collaborations
[00:14:56.230]as well to make sure these devices
[00:14:57.840]can be implemented effectively to provide
[00:15:00.180]functional communication to these individuals
[00:15:02.270]who may have no other form of communication.
[00:15:08.120]Hi there, my name is Naomi Rodgers.
[00:15:10.910]I'm a faculty member in the Department of Special Education
[00:15:13.820]and Communication Disorders.
[00:15:15.880]I'm also the director of the Stuttering Lab.
[00:15:18.480]So stuttering affects around 1% of the population.
[00:15:22.120]Stuttering is hallmarked by a disruption
[00:15:24.370]to the forward flow of speech.
[00:15:26.780]And so people who stutter will produce sound
[00:15:31.860]repetitions, prolongations and blocks
[00:15:36.850]in their speech like that.
[00:15:39.200]Lots of people who stutter also demonstrate
[00:15:40.920]what are called secondary behaviors.
[00:15:43.230]And they will do things like look away from their listener
[00:15:47.160]as they're stuttering or perhaps move their head
[00:15:49.450]or limbs as they're stuttering.
[00:15:51.410]Stuttering is often really uncomfortable
[00:15:53.090]for both the speaker and the listener.
[00:15:55.260]And so people who stutter oftentimes
[00:15:58.120]will utilize lots of tricks
[00:16:00.210]to avoid stuttering, like using filler words,
[00:16:06.970]swapping in words that are easier to say,
[00:16:09.690]or avoiding interactions altogether.
[00:16:12.730]Here in the Stuttering Lab,
[00:16:13.840]we're interested in the cognitive,
[00:16:17.160]affective and social factors that influence
[00:16:20.750]how people who stutter experience stuttering,
[00:16:23.450]and also how they take steps
[00:16:25.180]towards making a positive change to their stuttering
[00:16:27.910]and overall communication.
[00:16:30.110]Our research approach is interdisciplinary
[00:16:33.330]as we integrate scientific approaches
[00:16:36.700]from the fields of behavioral health, health psychology
[00:16:39.740]and cognitive science.
[00:16:41.820]Students who are interested in stuttering are encouraged
[00:16:44.270]to get involved in the lab.
[00:16:45.800]Student research activities include things like transcribing
[00:16:50.720]and analyzing interviews with people who stutter,
[00:16:54.880]creating and disseminating surveys
[00:16:58.220]to people who stutter and the speech-language pathologists
[00:17:00.720]who serve them, administering assessments
[00:17:05.220]that tap into the holistic nature of stuttering,
[00:17:07.750]as well as running experimental tasks
[00:17:10.370]in real time with people who stutter.
[00:17:13.020]The overarching goal of the Stuttering Lab
[00:17:14.730]is to better understand the psychosocial nature
[00:17:17.510]of stuttering so that we can maximize therapeutic approaches
[00:17:20.850]for those individuals.
[00:17:22.770]Thanks for your interest.
[00:17:27.230]Hi, my name is Yingying Wang.
[00:17:30.080]I'm currently an assistant professor
[00:17:32.650]at the University of Nebraska-Lincoln
[00:17:36.380]at the Department of Special Education
[00:17:38.250]and Communication Disorders.
[00:17:40.300]And I'm the director of the Neuroimaging
[00:17:42.920]for Language, Literacy and Learning Lab.
[00:17:45.910]So, here we are in the MRI simulator room.
[00:17:50.930]This is one of the systems
[00:17:53.230]I use a lot. One of my projects
[00:17:56.140]studies children who are learning
[00:18:00.020]to read and how the brain actually changes over time.
[00:18:05.010]And that's not just in typically developing children
[00:18:09.110]but also in children who are deaf or hard of hearing.
[00:18:12.320]We need this simulator because the younger children,
[00:18:15.880]especially in the group I'm interested in,
[00:18:18.480]are younger than six years old.
[00:18:21.210]It lets them adapt to the environment
[00:18:23.990]and know what the true MRI room
[00:18:26.270]will be like, what the noise
[00:18:27.790]is, and where the table is, and we also make
[00:18:30.500]it child-friendly.
[00:18:31.550]We tell them this is a jungle,
[00:18:33.270]so they will be playing a game in the real MRI room.
[00:18:36.400]Giving them half an hour here
[00:18:37.790]before the real session actually increases the chance
[00:18:40.800]for them to be successful.
[00:18:42.600]My lab is trying to use state-of-the-art
[00:18:46.730]neuroimaging techniques to understand the biological
[00:18:50.560]mechanisms of certain behaviors,
[00:18:53.150]or brain plasticity.
[00:18:56.980]Brain plasticity can be due to experience
[00:19:01.090]or maybe some insult to the brain,
[00:19:03.860]such as traumatic brain injury or stroke,
[00:19:05.800]causing the brain to change.
[00:19:07.930]What happens in the brain, the neural basis,
[00:19:10.500]is what I really focus on in our lab.
[00:19:13.110]So currently we have two main projects.
[00:19:15.620]One we call BDR,
[00:19:17.770]which stands for Brain Development of Reading.
[00:19:20.440]We study children and
[00:19:22.780]how their brains change over the period
[00:19:25.610]when they learn to read.
[00:19:26.730]Another project we call BACIC,
[00:19:29.090]which stands for Brain Activation of Cochlear Implant Candidates.
[00:19:33.730]Those individuals
[00:19:35.350]are going to get a cochlear implant,
[00:19:37.680]and we will ask them to come to our lab
[00:19:39.900]to get brain imaging data
[00:19:42.270]and then also follow them longitudinally.
[00:19:44.680]The goal is to understand whether anything in the brain
[00:19:49.120]can help us predict their surgical outcomes,
[00:19:53.140]mostly focusing on their speech
[00:19:54.770]perception ability after surgery.
[00:19:57.250]And if you don't know about cochlear implants,
[00:20:00.390]it's a device that is actually implanted in the inner ear
[00:20:03.650]to help individuals
[00:20:05.510]who have severe to profound hearing loss regain
[00:20:08.770]the ability to hear.
[00:20:10.020]This process is not like getting glasses,
[00:20:12.960]where you have the glasses and you can see right away.
[00:20:15.940]Our brains have to adapt to this new device
[00:20:18.700]and learn to establish new pathways
[00:20:22.590]for the new sensory input, because the cochlear implant
[00:20:25.710]only gives us a degraded sound signal.
[00:20:28.420]It's not the traditional sound we hear
[00:20:31.760]as a typical hearing person.
[00:20:33.730]So this is the actual MRI system,
[00:20:36.340]a 3 Tesla Siemens Skyra.
[00:20:39.560]This is the machine
[00:20:42.770]my lab uses a lot to collect functional MRI data,
[00:20:47.290]and it is located in the Center for Brain, Biology and Behavior, CB3.
[00:20:52.150]So CB3 offers a unique opportunity for faculty
[00:20:57.450]across campus to collaborate
[00:20:59.710]on interdisciplinary projects.
[00:21:02.170]Here we are in the fNIRS lab.
[00:21:04.530]fNIRS stands for functional near-infrared spectroscopy.
[00:21:08.530]If you look here, we have those light sensors:
[00:21:11.690]the red is the source and the blue is the detector.
[00:21:15.030]What happens is, when we actually show
[00:21:18.120]a paradigm to the participants,
[00:21:20.110]their brain activity causes
[00:21:22.410]the deoxygenated hemoglobin and oxygenated hemoglobin content
[00:21:27.430]to change, which can be detected by these light sources.
[00:21:31.400]So that's how we can detect the function of the brain.
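Quantitatively, fNIRS analyses typically recover those two hemoglobin changes from light attenuation via the modified Beer-Lambert law: measuring optical-density changes at two wavelengths gives two equations in the two unknowns (oxygenated and deoxygenated hemoglobin). A minimal sketch of that inversion; the coefficients in the example are placeholders, not the tabulated extinction values or calibrated pathlength factors a real analysis would use.

```python
import numpy as np

def mbll_concentration_changes(d_od, ext_coeffs, distance_cm, dpf):
    """Solve the modified Beer-Lambert law for [dHbO, dHbR].

    d_od: optical-density changes at two wavelengths, shape (2,).
    ext_coeffs: 2x2 extinction-coefficient matrix,
        [[eps_HbO(l1), eps_HbR(l1)], [eps_HbO(l2), eps_HbR(l2)]].
    distance_cm: source-detector separation; dpf: differential
    pathlength factor. dOD = eps * dC * distance * dpf, so we invert
    the 2x2 linear system for the concentration changes.
    """
    a = np.asarray(ext_coeffs, dtype=float) * distance_cm * dpf
    return np.linalg.solve(a, np.asarray(d_od, dtype=float))
```

Each source-detector pair in the cap shown yields one such pair of equations, so the whole montage produces a map of hemoglobin changes across the scalp.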
[00:21:34.660]And we use this one to study individuals
[00:21:37.320]with cochlear implants,
[00:21:38.680]because once someone has a cochlear implant,
[00:21:41.500]they can't go into the MRI scanner anymore.
[00:21:43.960]So this device is actually compatible
[00:21:46.720]with cochlear implant users,
[00:21:48.860]and it's safer for them
[00:21:51.340]to get their brain imaging scan.