Identifying Microparticle Clusters in High Resolution: Computer Vision Applied to Polymer Particles in Liquid Crystal (LC) to Enable On-the-Fly Characterization of Their Morphology and Size
Michael Batavia - Parallel I
09/26/2024
Description
Student’s name: Michael Batavia
Home Institution: NYU Tandon School of Engineering
NNCI Site: CNF @ Cornell University
REU Principal Investigator: Prof. Fengqi You and Prof. Nicholas Abbott
REU Mentors: Soumyamouli Pal and Guangyao Chen
Abstract: In-situ monitoring of polymer particle formation by iCVD-in-LC is limited by relatively low-resolution (LR) imaging compared to ex-situ microscopy approaches. In this paper, we address this challenge by developing and fine-tuning an object-oriented super-resolution model to improve the image resolution obtained from in-situ monitoring and enable identification and characterization (e.g., size estimation) of individual polymer particles, both singly dispersed and aggregated as clusters. This framework transforms low-resolution images into object-level high-resolution (HR) images by learning from LR-HR image pairs obtained from samples observed both in the iCVD reactor (LR) and in the laboratory using a high-resolution microscope (HR). Current methods are constrained by the use of object detection, which is limited by the time needed to manually label the clusters during both training and testing for input into the super-resolution neural network. By contrast, our method uses classical computer vision techniques to isolate the slides where the particle clusters lie, preserving spatial awareness and allowing for scalability with little to no manual preprocessing of LR images before input into the super-resolution network. In this project, we used 5µm polystyrene particles dispersed in 5CB liquid crystal as a surrogate for polymerization via iCVD when building the LR and HR datasets. To align the image pairs and maximize the visible surface area of the slide and clusters, we converted images to grayscale, applied a Gaussian blur (σ = 20), identified and filtered the image contours with maximum area via the marching squares algorithm, and applied a Hough transform to the LR images to correct for any rotational vibrations introduced by the iCVD reactor during imaging. These slides were split into an 80/10/10 train/test/validation split and used to fine-tune the Real-ESRGAN super-resolution model from pretrained weights. The resolution upgrade was quantified using the peak signal-to-noise ratio (PSNR) metric during validation and testing. In the end, we show that super-resolution neural networks may present an alternative to ex-situ microscopy approaches for predicting the characteristics of iCVD polymer formation, using only the LR iCVD reactor image set.
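For reference, the peak signal-to-noise ratio used to quantify the resolution upgrade follows the standard definition, where MAX is the maximum pixel value (255 for 8-bit images) and the MSE is taken between the ground-truth HR image and the upscaled (SR) output:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right),
\qquad
\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}
\bigl(I_{\mathrm{HR}}(i,j) - I_{\mathrm{SR}}(i,j)\bigr)^2
```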
Transcript
- [00:00:00.440]Hi, hello everyone.
- [00:00:01.440]My name is Michael Batavia
- [00:00:02.640]from the NYU Tandon School of Engineering,
- [00:00:05.420]majoring in computer science, working with the Cornell group.
- [00:00:07.640]And today I want to talk about identifying
- [00:00:09.540]microparticle clusters in higher resolution.
- [00:00:12.360]Where the goal is to identify and characterize
- [00:00:14.280]these microparticle clusters,
- [00:00:15.560]which are these little clusters over here,
- [00:00:17.520]without the need for ex-situ microscopy
- [00:00:19.480]or the use of an optical microscope.
- [00:00:21.520]Why is this important?
- [00:00:24.180]Well, when you're doing an iCVD reaction,
- [00:00:26.200]a polymerization in an iCVD reactor,
- [00:00:28.860]you typically only have access to this photo,
- [00:00:31.260]which is this low resolution in-situ image.
- [00:00:33.780]And you have to transfer it to an optical microscope
- [00:00:36.100]to characterize these clusters.
- [00:00:38.140]You can see them in pretty good detail here.
- [00:00:40.580]The problem is this reaction can take an hour,
- [00:00:42.620]so you need to wait for it to be done,
- [00:00:44.140]then you need to transfer your slide to an optical microscope.
- [00:00:47.300]You withdraw the slide, start over; it's a one-hour process.
- [00:00:51.380]If you want to characterize it during the hour,
- [00:00:54.680]you can't do that.
- [00:00:55.520]You need to wait until the hour finishes.
- [00:00:57.540]So ideally, we want to work with this while it's happening
- [00:01:02.600]instead of having to transfer to this when it's done.
- [00:01:06.320]As you can clearly see, though, you
- [00:01:08.800]can't really see anything in much detail here.
- [00:01:10.940]Well, you can see the individual microparticles here.
- [00:01:13.600]It's kind of hard to see anything here.
- [00:01:15.180]And this is because the camera on top of the iCVD reactor
- [00:01:18.360]has a long-distance focal lens, which
- [00:01:20.900]means the camera's up here and the slide's all the way down
- [00:01:23.980]here.
- [00:01:24.540]It's kind of like zooming in on your smartphone.
- [00:01:27.320]You kind of lose the sense of position.
- [00:01:34.180]Yeah, and you can't really see what
- [00:01:35.560]you're looking at very well.
- [00:01:36.680]And also, the iCVD reactor is vibrating,
- [00:01:39.320]so this image right here is moving up, down, left, right.
- [00:01:42.760]You can't get the same photo from the same point of view.
- [00:01:46.280]So the goal of this project is that we
- [00:01:49.680]want to turn this image into something similar to this image,
- [00:01:53.540]or we want to upscale it.
- [00:01:55.140]So we get to look at the clusters in high
- [00:01:57.100]resolution without needing a microscope.
- [00:02:01.120]Well, how do we do this?
- [00:02:02.780]Well, we're going to use the power of neural networks.
- [00:02:06.100]And we're going to convert this low resolution image
- [00:02:10.220]into an upscaled version using a super-resolution neural network,
- [00:02:13.480]or SR neural network for short.
- [00:02:14.980]And the longer the SR network trains,
- [00:02:20.200]the better the quality of the upscaled image you get,
- [00:02:22.540]and the more you can characterize
- [00:02:24.160]the individual particles, which is what we're interested in,
- [00:02:26.880]which is good for us.
- [00:02:28.780]So the currently used method is object detection,
- [00:02:31.200]where you draw a box around each of the clusters
- [00:02:33.300]you're interested in.
- [00:02:34.500]And this has the advantage of being ridiculously easy,
- [00:02:37.160]because you just draw a box across anything
- [00:02:39.120]you're interested in.
- [00:02:40.200]And you can immediately take this to the neural network.
- [00:02:43.060]The disadvantage is you lose all spatial awareness,
- [00:02:45.360]because you're taking away the clusters from the slide.
- [00:02:47.980]And this requires lots of manual labor.
- [00:02:50.820]You need to do this for all parts of your data set.
- [00:02:53.820]If your data set is 220 images, you need to do it
- [00:02:56.660]for all 220 images.
- [00:02:59.300]My method is novel, because I'm using computer vision
- [00:03:01.700]techniques, where we're going to align the images so
- [00:03:04.400]that the features of interest, which are these things,
- [00:03:07.780]are focused in the image.
- [00:03:09.140]Which means we want to eliminate the background,
- [00:03:11.640]and we're going to characterize the upscaled resolution using
- [00:03:16.800]a couple of metrics, and by also looking at the upscaled image.
- [00:03:20.140]And we do our standard 80/10/10 train/test/validation split.
- [00:03:24.840]As an advantage, no manual labor.
- [00:03:26.440]Everything is automatic, scalable to any new input.
- [00:03:30.040]The disadvantage is it's quite tedious to do.
- [00:03:33.980]And also, the characterization of the particles
- [00:03:36.040]is dependent on the quality of the upscale.
- [00:03:38.000]So you're not going to get good characterization
- [00:03:41.620]if the upscale isn't very good.
- [00:03:44.540]So we have to align the images.
- [00:03:47.980]How do we do that?
- [00:03:49.240]Here's how we do it for a low-resolution image,
- [00:03:51.780]and here's how we do it for a high-resolution image,
- [00:03:53.940]with the low resolution being the more problematic one.
- [00:03:56.220]And the high resolution is not that bad to do.
- [00:03:59.240]First step we do is we apply a Gaussian Blur,
- [00:04:01.420]and then we do some adaptive thresholding.
- [00:04:03.620]This gets rid of the background noise.
- [00:04:05.560]We don't really care about this.
- [00:04:06.980]We don't care about this.
- [00:04:08.360]And we segment the image into some initial foreground
- [00:04:10.860]and background.
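A minimal sketch of this first step in Python with OpenCV: the σ = 20 blur comes from the abstract, while the adaptive-threshold block size and offset (and the filename) are illustrative assumptions.

```python
import cv2

# Load an in-situ reactor frame and convert to grayscale.
img = cv2.imread("lr_frame.png")  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Heavy Gaussian blur (sigma = 20; kernel size derived from sigma)
# suppresses fine background texture before segmentation.
blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=20)

# Adaptive thresholding gives the initial foreground/background mask;
# blockSize and C are tuning knobs, not values from the talk.
mask = cv2.adaptiveThreshold(
    blurred, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY,
    blockSize=51, C=2,
)
```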
- [00:04:13.120]Next, we apply a Hough transform.
- [00:04:14.780]This corrects for the rotational vibration
- [00:04:16.940]of the iCVD reactor.
- [00:04:19.400]There's typically like a one-to-two-degree
- [00:04:21.380]rotational difference we have to correct.
- [00:04:23.380]It's hard to see right here, but it definitely
- [00:04:25.420]makes a difference.
- [00:04:26.000]And the alignment is only possible
- [00:04:28.880]if you can get the particles in the same position
- [00:04:30.920]for both the low res and the high res image.
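One way this rotation correction could look, assuming the slide's straight edges dominate the Hough accumulator (the edge and vote thresholds here are illustrative, not values from the talk):

```python
import cv2
import numpy as np

def correct_rotation(mask: np.ndarray) -> np.ndarray:
    """Estimate the dominant line angle with a Hough transform and
    rotate the frame so the slide edges sit axis-aligned."""
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)
    if lines is None:
        return mask  # nothing reliable detected; leave the frame alone

    # Deviation of each detected line from the nearest 90-degree axis;
    # the reactor typically introduces only a 1-2 degree tilt.
    angles = np.degrees(lines[:, 0, 1])
    tilt = float(np.median((angles + 45) % 90 - 45))

    h, w = mask.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), tilt, 1.0)
    return cv2.warpAffine(mask, M, (w, h))
```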
- [00:04:33.540]Next, we run the adaptive thresholding
- [00:04:36.400]to straighten some of the contours.
- [00:04:37.900]And you see it kind of makes it look more geometric.
- [00:04:42.000]And then we fill in all of these holes over here
- [00:04:44.620]because we just want to get a slide over here.
- [00:04:47.980]We don't particularly care about that.
- [00:04:49.680]We can isolate that later.
- [00:04:51.660]And this also cleans up some stuff over here.
- [00:04:55.780]This cleans up, basically, noise.
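The hole filling could be done with a flood fill or, as sketched here, with SciPy's binary fill (an assumed implementation choice, not necessarily the project's):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# `mask` is the thresholded frame from the earlier sketch.
# Fill enclosed holes inside the slide region so only the slide's
# outer boundary survives to contour detection.
filled = binary_fill_holes(mask > 0).astype(np.uint8) * 255
```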
- [00:04:58.080]And then finally, we use Marching Squares algorithm
- [00:05:00.860]to detect all the contours in the images.
- [00:05:03.460]And the square over here, which contains the microparticle
- [00:05:06.340]clusters, is a contour.
- [00:05:07.840]So by detecting all the contours,
- [00:05:09.760]we'll eventually get a square.
- [00:05:11.640]Marching Squares algorithm basically
- [00:05:13.300]works by connecting the dots.
- [00:05:15.100]If you have a binary image, which is 1s and 0s,
- [00:05:17.360]you have a lookup table that tells you where
- [00:05:19.500]to draw a line between the points.
- [00:05:21.740]And by using this algorithm, if we have an image such as this,
- [00:05:25.560]I'll have to click this, boom, square detected.
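scikit-image's find_contours is a standard marching-squares implementation, so this step might be sketched as:

```python
from skimage import measure

# `filled` is the hole-filled binary mask from the previous sketch.
# Marching squares traces the 0/1 boundary and returns each closed
# contour as an (N, 2) array of (row, col) points.
contours = measure.find_contours(filled / 255.0, level=0.5)
```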
- [00:05:31.440]And then we also have to basically filter over
- [00:05:35.140]all the contours found, because we have a lot of noise.
- [00:05:37.640]But we're really only interested in the largest contour, which
- [00:05:40.300]is the slide, which contains the microparticle clusters.
- [00:05:43.420]And boom, we found the largest contour.
- [00:05:45.460]And now we can basically just crop out the background.
- [00:05:48.080]And here we have our data set.
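Filtering for the largest contour and cropping can then be as simple as the following sketch, using the shoelace formula for the enclosed area:

```python
import numpy as np

def contour_area(c: np.ndarray) -> float:
    """Area enclosed by a closed contour, via the shoelace formula."""
    y, x = c[:, 0], c[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Keep only the largest contour (the slide) and crop its bounding box
# out of the original frame (`img`), discarding the background.
slide = max(contours, key=contour_area)
r0, c0 = slide.min(axis=0).astype(int)
r1, c1 = slide.max(axis=0).astype(int)
cropped = img[r0:r1, c0:c1]
```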
- [00:05:50.400]So there's a lot of preprocessing required,
- [00:05:52.240]which is the most important thing for machine learning.
- [00:05:54.300]Machine learning isn't the hardest
part of this. And we're done.
- [00:05:58.650]Now we can do the super resolution.
- [00:06:01.050]So we have a couple of metrics here.
- [00:06:02.750]NIQE, PSNR (peak signal-to-noise ratio),
- [00:06:05.310]and the structural similarity index (SSIM).
- [00:06:08.150]NIQE is just, like, how good is the image;
- [00:06:10.810]it's a blind quality metric.
- [00:06:12.650]We got PSNR, which is the pixel-by-pixel comparison
- [00:06:15.450]of the high resolution versus the upscaled image.
- [00:06:18.110]And we have the structural similarity index,
- [00:06:20.090]which is a pattern-by-pattern comparison,
- [00:06:22.530]looking at blocks of the images.
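PSNR and SSIM are both available in scikit-image, so the evaluation against held-out HR images can be sketched as follows (filenames are illustrative; NIQE, being a no-reference metric, needs a separate implementation):

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Matching ground-truth HR image and upscaled output (hypothetical files).
hr_img = cv2.imread("hr_slide.png")
sr_img = cv2.imread("sr_slide.png")

# Pixel-by-pixel comparison: higher PSNR means the upscaled image is
# numerically closer to the high-resolution ground truth.
psnr = peak_signal_noise_ratio(hr_img, sr_img, data_range=255)

# Block-by-block comparison of local structure, in [0, 1].
ssim = structural_similarity(hr_img, sr_img, data_range=255, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```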
- [00:06:25.210]And here you can see the training of the neural network.
- [00:06:28.330]So this is the structural similarity index of the neural network,
- [00:06:31.850]which is Real-ESRGAN, per epoch.
- [00:06:34.010]And then you have the peak signal-to-noise ratio.
- [00:06:36.530]We want this to be as high as possible.
- [00:06:38.550]Ideally, the graph should look like this.
- [00:06:41.150]This requires some more training.
- [00:06:43.690]This is the progress of training
- [00:06:45.990]over the epochs; it took about three days.
- [00:06:48.890]So here we have general inference.
- [00:06:51.290]What we did is we just
- [00:06:52.830]told the ESRGAN to upscale the image
- [00:06:56.810]using only its prior training,
- [00:06:58.010]on, like, anime images, and it does decently well.
- [00:07:02.230]But then we did fine-tuning, where we trained
- [00:07:04.410]the neural network specifically on our data set.
- [00:07:06.970]And we used the 10% testing split of the data set
- [00:07:09.150]to see how well it does.
- [00:07:10.670]And it actually shows quite an improvement.
- [00:07:12.850]You might be saying, well, this contradicts that;
- [00:07:14.490]I can't see any of the clusters here.
- [00:07:16.710]Typically this number needs to be around 30
- [00:07:18.710]for the visual detail to come through.
- [00:07:22.510]But mostly,
- [00:07:24.110]you want to look at the visual aspect.
- [00:07:26.690]The metrics are helpful
- [00:07:27.690]to complement the visuals.
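For the general-inference baseline, the Real-ESRGAN repository provides a RealESRGANer helper; running the pretrained 4x model on one aligned LR slide might look like this sketch (the weights path and filenames are assumptions):

```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# The x4 RRDB generator architecture that Real-ESRGAN's weights expect.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path="weights/RealESRGAN_x4plus.pth",  # assumed local weights file
    model=model,
    tile=0,        # no tiling; raise this if GPU memory runs out
    half=False,
)

# Upscale one aligned 200x200 LR slide to 800x800.
lr = cv2.imread("lr_slide_aligned.png")
sr, _ = upsampler.enhance(lr, outscale=4)
cv2.imwrite("sr_slide.png", sr)
```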
- [00:07:32.610]Now as I've been referring to all this time,
- [00:07:34.770]where are the use cases for this?
- [00:07:36.370]Mainly, the use case is that
- [00:07:38.530]we can detect the microparticle clusters
- [00:07:40.330]in the upscaled image.
- [00:07:41.330]By doing some more computer vision trickery here,
- [00:07:43.890]we start with the upscaled image.
- [00:07:45.770]Then we can isolate the clusters.
- [00:07:48.030]We can clean them up
- [00:07:49.870]and make their contrast a little stronger.
- [00:07:52.450]And boom, we can detect all of them and their positions.
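A plausible sketch of that cluster-detection pass (contrast enhancement, thresholding, then contour detection), with all parameters illustrative:

```python
import cv2

sr = cv2.imread("sr_slide.png", cv2.IMREAD_GRAYSCALE)

# Boost local contrast so the dark clusters separate from the slide.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(sr)

# Binarize (clusters darker than background) and trace their outlines.
_, binary = cv2.threshold(enhanced, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
clusters, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Report each cluster's position as its bounding-box center.
for c in clusters:
    x, y, w, h = cv2.boundingRect(c)
    print(f"cluster at ({x + w // 2}, {y + h // 2}), size {w}x{h}")
```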
- [00:07:57.370]We can also detect the individual microparticles,
- [00:07:59.910]which is really, really, really what we want.
- [00:08:03.310]This is on a different image.
- [00:08:04.410]This isn't the same image,
- [00:08:05.670]but you can see each of these little circles.
- [00:08:07.730]We've done this using a morphological erosion;
- [00:08:10.510]after the erosion, we run contour detection,
- [00:08:13.330]and we can get each of these particles
- [00:08:15.310]and characterize them by drawing a bounding box
- [00:08:17.810]and getting an area, perimeter,
- [00:08:19.210]or whatever else the chemists want.
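And the particle-level pass can be sketched the same way: a morphological erosion to split touching particles, then per-particle contours and bounding boxes for the area and perimeter estimates (kernel size and iteration count are assumptions):

```python
import cv2
import numpy as np

# Re-load the binarized cluster mask from the previous step so this
# sketch stands alone (hypothetical filename).
binary = cv2.imread("cluster_mask.png", cv2.IMREAD_GRAYSCALE)

# Erode to break the bridges between touching microparticles.
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(binary, kernel, iterations=2)

particles, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)

for p in particles:
    area = cv2.contourArea(p)            # pixel area of the particle
    perimeter = cv2.arcLength(p, True)   # closed-contour perimeter
    x, y, w, h = cv2.boundingRect(p)     # bounding box for size estimates
    print(f"particle at ({x}, {y}): area={area:.0f} px, "
          f"perimeter={perimeter:.1f} px, bbox={w}x{h}")
```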
- [00:08:23.470]So the conclusion is that it's possible to use
- [00:08:27.050]a super-resolution neural network
- [00:08:28.630]to upscale a low-resolution image
- [00:08:30.850]and basically make out some of the clusters
- [00:08:35.750]at the particle level.
- [00:08:36.870]We can approximate the area and perimeter
- [00:08:38.590]using a bounding box.
- [00:08:40.310]It's gonna take some time when we're fine-tuning,
- [00:08:42.670]because the visual quality needs some more training
- [00:08:47.670]compared to just flat-out inference,
- [00:08:50.130]but we can do this using alignment.
- [00:08:51.810]We don't have to spend 40 hours using object detection
- [00:08:55.790]to label each of the individual clusters.
- [00:08:56.730]And the individual microparticles
- [00:09:00.030]in the clusters still show some promise
- [00:09:01.510]of being detected, with some more investigation.
- [00:09:04.310]I'd like to thank all of my mentors and PIs at Cornell,
- [00:09:09.270]and we have time to take any questions.
- [00:09:26.410]Yes.
- [00:09:31.410]Or like, did you ever find like a minimum resolution
- [00:09:35.170]of the particle sizes?
- [00:09:39.330]Typically, we do like--
- [00:09:41.790]yes, so the question is, did we find any minimum resolution
- [00:09:45.470]of the particle sizes?
- [00:09:47.130]Typically, what this neural network does
- [00:09:48.750]is it does 4x upscaling.
- [00:09:51.410]So you have the low resolution images
- [00:09:54.010]that we fed in were 200 by 200.
- [00:09:56.090]And we produced an 800 by 800 upscale image.
- [00:10:01.110]There were a lot of problems throughout the project,
- [00:10:03.070]because Google Colab does not make this very easy,
- [00:10:07.990]because it takes a long time.
- [00:10:09.150]You just have one GPU.
- [00:10:10.650]And that's all you can use, unless you have lots of money
- [00:10:13.910]and have multiple GPUs.
- [00:10:15.950]And you will only be able to get 200 by 200,
- [00:10:20.170]because with anything bigger, we run out of RAM and memory.
- [00:10:22.650]And that's not good.
- [00:10:25.390]Thank you.
- [00:10:25.770]Thank you.
- [00:10:27.970]Yes.
- [00:10:28.470]So you're training this using the [inaudible].
- [00:10:33.390]I'm wondering if you've used other materials
- [00:10:37.050]in some of the [inaudible]. Would you have to retrain
- [00:10:42.370]when you have a model without the features,
- [00:10:45.170]and then [inaudible]?
- [00:10:55.450]Yeah, so the question is, would we
- [00:11:00.730]have to retrain the neural network
- [00:11:02.130]if we have different features of interest we're interested in?
- [00:11:05.170]Typically, yes, we would have to retrain.
- [00:11:07.770]There's a certain part of neural networks where they build up
- [00:11:11.970]and learn basic shapes.
- [00:11:14.610]And then they train later on, and they learn specific things
- [00:11:18.650]you're interested in.
- [00:11:20.010]That's why I decided to do a simple inference with ESRGAN
- [00:11:24.130]based off of what it's already trained on.
- [00:11:25.130]It's trained on anime images, which are not biomedical images,
- [00:11:29.130]but it can give some baseline performance,
- [00:11:32.210]and I needed to use that to show a proof of concept
- [00:11:34.890]for these microparticle clusters.
- [00:11:36.930]Without that, it just looks like noise.
- [00:11:41.170]But yeah, if there's anything else we're interested in,
- [00:11:43.650]we could just retrain this super resolution network,
- [00:11:46.250]or also look at different types of backends.
- [00:11:50.490]Because this is a generative adversarial network.
- [00:11:52.610]There's also diffusion models.
- [00:11:54.810]Something might work better than this one.