December 5, 2014 – List of Working Projects for Fall 2014 and Concluding Remarks
The purpose of this post is simply to state which projects I worked on during the semester. I focused nearly all of my efforts on my independent project, the rapid urine test strip reader app, which amounted to around ninety hours of work. I also aided Lindsey in modifying the FRIome survey, though I contributed only an hour's worth of time by providing answers in a cognitive interview. This is summarized in the table below:
Project | Hours Spent |
Rapid Urine Test Strip Reader App | 90 |
FRIome | 1 |
In addition, below is a list of the locations of all the materials I worked with on the urine test strip reader project. Neither material requires any special storage; just keep the strips at room temperature.
Material | Location |
Urine Test Strips | Drawer beneath Lab Computer (In Bench Spaces) |
Urine Test Strip Placement Mat | Drawer beneath Lab Computer (In Bench Spaces) |
In addition, a link to my code can be found here, and the working document that serves as the template for the placement mat is attached below:
To a future student interested in working on this project: my code currently works better on iPhones than on other mobile devices. My algorithm currently does a good job of identifying columns but not rows. Ideally, the code will be able to identify the edges of the placement mat and, from them, the locations of the test strip and calibrator panels. It could then compare the colors of the test strip panels to those of the calibrator panels directly above them and thus produce a diagnostic reading. That was my vision for the project, but feel free to take it in a new direction if you wish. I hope you can one day improve the accessibility and accuracy of urine diagnostics.
November 25, 2014 – Rapid Urine Test Strip Reader App
Because I was unable to come into lab when Vatsal, Haley, or Dr. Riedel were available to help me debug my code, I decided to focus my efforts on additional testing. I recruited a friend to take a couple of pictures of the placement mat, giving her the same instructions I gave Alisha several weeks ago. The purpose of this additional testing was to ensure that the app is easy to use for different users and that pictures taken by different users can be processed correctly by my algorithm.
Unfortunately, I found that my code does not hold up. My friend used a Samsung phone, the first on which I have tested my app. While she was taking the photo, I noticed that on the Samsung phone the camera preview spans the entire screen, in contrast to the iPhone camera screen, in which the sides are blocked by black bars. As a result, as shown in the picture on the left, when the user aligns the edge of the blue box with the edge of the camera screen on a Samsung device, the picture edge lines up exactly with the blue box. However, as shown in the picture on the right, when the user performs the same procedure on an iPhone, the picture edge ends up slightly left of the box because the black bar on the camera screen leaves some additional room on the side.
This prevented my algorithm from finding the left edge of the blue box, as shown in the picture below. I am currently debugging this, but I will most likely ask Vatsal, Haley, or Dr. Riedel for help next week. I may also entertain the idea of redesigning my placement mat: by extending the vertical edges of the blue box, my algorithm may more easily recognize those edges. I am quite discouraged by these results at the moment, but hopefully Thanksgiving will help me regain my motivation. In the meantime, I will try to code my Instructions page, and I will continue attempting to debug my code.
As a reminder, a link to my code can be found here.
November 23, 2014 – Rapid Urine Test Strip Reader App
Today I modified my placement mat in an attempt to make the top and bottom edges of the box more easily recognized by my computer algorithm. Again, the purpose of this was to aid in locating the calibration and test strip panels on the placement mat.
I thought that by extending the edges of the box as shown in the picture above, the algorithm would more easily identify the top and bottom edges as the rows with the most saturated pixels. However, this failed to produce fruitful results. On closer examination of my code, I realized that it consistently outputs the first row as the one with the most saturated pixels. In addition, I discovered that my algorithm currently reports that the first row always has 599 or 600 saturated pixels. (Each row only has 600 pixels total.) Although I am currently unable to figure out why my algorithm is producing such results, I plan to ask Dr. Riedel, Vatsal, or Haley for help debugging this problem this coming week. Afterwards, I will be able to write the code that actually compares the panels and start recruiting classmates to test the consistency and accuracy of my code.
As always, a link to my code can be found here.
November 22, 2014 – Rapid Urine Test Strip Reader App
Today I focused on debugging the issues with my algorithm that seeks to identify the column and row with the fewest pixels that have saturation values below 10%. The purpose of this algorithm is to identify the edges of the blue box of the placement mat and thus help locate the relative positions of the test strip and calibration panels.
As a reminder, as of my last post, my code was returning “NaN” and “-1” values when executing. Today, I traced the main source of this error to comparisons between integers and strings: my original code had been searching for the string “1” when all the values in the arrays were integers. After much time spent identifying this error, I was able to correct my code to return real values.
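For illustration, here is a minimal sketch of that kind of mismatch (values hypothetical); JavaScript's strict comparisons never treat a string as equal to a number:

var tallies = [0, 3, 1];  // integer values, as in my arrays
tallies.indexOf("1");     // -1: indexOf uses strict equality, so "1" never matches 1
tallies.indexOf(1);       // 2: comparing number to number succeeds
"1" === 1;                // false for the same reason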
My algorithm currently works fairly well at identifying the columns of the box, as shown in the figures below, which represent two processed images taken by two different users. Because the left and right sides of the box should theoretically be identical and thus have equal numbers of highly saturated pixels, I modified my algorithm to identify only the column with the most saturated pixels located between 50 and 150 pixels from the left side of the canvas. I believe this parameter should be robust enough to handle small deviations in the location of the left edge of the box due to variations in how the user takes the picture, but I will definitely test this theory next week. I also counted the number of pixels between each column of panels and then wrote a for loop that automatically draws vertical lines 26 pixels apart; a sketch of this step follows this paragraph. As shown in the figures, all of the vertical lines drawn do cross some part of the calibration panels. I did notice some deviations between the two images: although 26 pixels were used to space the vertical lines in both, the spacing between the panels in the left image is not quite as large as in the right image. If this proves to be a problem during additional testing next week (i.e., if the lines begin to fall outside the calibration panels), I will add code that identifies the rightmost edge of the box and then calculates the number of pixels between each column of panels as a proportion of the box width, to adjust for discrepancies in magnification.
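Here is a sketch of this step, assuming columns[] holds the per-column tallies of low-saturation pixels from the November 8 entry below (so the most saturated column is the one with the smallest tally); the names are illustrative and my actual code may differ:

var bestCol = 50; // search only 50-150 px from the left edge of the canvas
for (var c = 51; c <= 150; c++) {
  if (columns[c] < columns[bestCol]) bestCol = c; // fewest low-saturation pixels
}

// Draw vertical guide lines every 26 px, starting at the detected edge.
ctx.strokeStyle = "black";
for (var x = bestCol; x < canvas.width; x += 26) {
  ctx.beginPath();
  ctx.moveTo(x, 0);
  ctx.lineTo(x, canvas.height);
  ctx.stroke();
}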
Although I attempted to do the same for the rows, I noticed that my current algorithm does not do a good job of identifying the top and bottom edges of the box. I plan to go into lab tomorrow to modify my placement mat, extending the top and bottom edges so that my algorithm can more easily identify them as the rows with the most saturated pixels. I will then extend my algorithm to identify the number of pixels between each row of calibration panels. Afterwards, the next steps for my app are extensive testing to make sure that my algorithm works for multiple pictures taken by multiple users.
As always, here is a link to my code.
11/05/14 – 11/12/14 WIKI CHECK #5
Check | Good (“G”), Needs work (“W”), Missing or Poor (“X”) | Comments |
Significant progress made since last Check (if no then grader stops and notifies RE) | W | Try to have at least one entry each week. |
Reverse Chronological | G | |
Readable/Understandable/Organized, separate experiments clearly indicated | G | Highlight different dates so you can find information more easily when you go back to past notes. |
All experiments have brief intro/purpose statement | G | |
Data summarized appropriately in a figure/table/image | G | |
All figures/tables/images fully labeled including controls | G | |
Data linked back to lab notebook | G | |
Units reported with all numbers | G | |
Results: Data interpreted for meaning with specific mention of controls | G | |
Results interpreted for deciding next steps | G | |
“G” x 1 pt, “X” x 0 pt, “W” x 0.5 pt | Total: 9.5 | Grader Name: AI |
November 8, 2014 – Rapid Urine Test Strip Reader App
Today I began coding the mathematical algorithm that will be used to determine the location of the edges of the blue box on the placement mat, as shown in the pictures from previous updates. The purpose of this algorithm is to find the locations of the calibration and test strip panels so that they may be compared and a diagnostic result outputted.
I decided to approach this section of the app by identifying which rows and columns contained the fewest pixels with saturation values of less than 10%. As described in previous posts, the pixels of the paper tend to have low saturation values, so by finding the rows and columns with the fewest such low-saturation pixels, my app identifies the rows and columns in which the calibration and test strip panels are located. My algorithm currently calculates the saturation value of each pixel, and if the saturation value is less than 10%, my code adds one to the corresponding indices in two arrays: rows[] and columns[]. In doing so, my code tallies the number of pixels with low saturation values in each row and column. I have also attempted to modify my code to identify the row and column with the fewest low-saturation pixels, but I know that there is currently an error in this section of the algorithm. I will be fixing this, as well as testing whether my algorithm is even adding to the rows[] and columns[] arrays properly, next week.
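To make this concrete, here is a minimal sketch of the tallying step; the canvas id is hypothetical, and the saturation formula is the standard HSL one, which may differ slightly from my actual code:

function computeSaturation(r, g, b) {
  // HSL saturation as a fraction of 1; grays come out as 0.
  var max = Math.max(r, g, b) / 255, min = Math.min(r, g, b) / 255;
  if (max === min) return 0;
  return (max + min <= 1) ? (max - min) / (max + min)
                          : (max - min) / (2 - max - min);
}

var canvas = document.getElementById("analysisCanvas"); // hypothetical id
var ctx = canvas.getContext("2d");
var imageInfo = ctx.getImageData(0, 0, canvas.width, canvas.height);
var rows = new Array(canvas.height).fill(0);   // low-saturation count per row
var columns = new Array(canvas.width).fill(0); // low-saturation count per column

for (var i = 0; i < imageInfo.data.length; i += 4) { // data is a flat RGBA array
  var p = i / 4;                         // pixel index
  var x = p % canvas.width;              // column of this pixel
  var y = Math.floor(p / canvas.width);  // row of this pixel
  if (computeSaturation(imageInfo.data[i], imageInfo.data[i + 1], imageInfo.data[i + 2]) < 0.10) {
    rows[y]++;     // paper and black ink land here;
    columns[x]++;  // the colored panels mostly do not
  }
}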
As a reminder, my code may be accessed here.
November 7, 2014 – Rapid Urine Test Strip Reader App
Today I had Alisha help me test my app in unfamiliar lighting. The purpose of this was to make sure that the app is easy to use and to ensure that changes in lighting do not significantly impact saturation values and, consequently, the detection of the blue boxes on my placement mat. It turned out that the lighting in the hallway outside the lab produced even better results than lab lighting did. The results of my app algorithm are shown in the picture below:
I did change my algorithm to turn all pixels with saturation values less than 10% white. While this fails to distinguish between the naturally white pixels of the paper and the modified pixels, the point is that the algorithm still recognizes the blue boxes in the lighting of the Painter hallway. Moreover, the lines of the boxes are fairly parallel to the edges of the picture, and Alisha mentioned that she did not take too long in taking the picture. Thus, it seems that the current approach of the app and placement mat is fairly easy to use. I am still in the process of coding the algorithm that compares the colors of the calibration chart to those of the test strip. Tomorrow I will code the section of my algorithm that determines which column and row the lines of the blue box fall in, so that the app can identify the locations of the calibration and test strip panels.
November 1, 2014 – Rapid Urine Test Strip Reader App
Today I experimented with modifying the design of my placement mat. The purpose of this modification was to address the issues regarding the recognition of the blue box as described in my previous post. I replaced the larger black box with two black bars above and to the left of the blue box as shown in the picture below:
The purpose of these two black bars was simply to allow the user to align his/her camera with them in order to avoid crooked pictures. I then adjusted my computer algorithm to also calculate the lightness and saturation of each pixel of the image. Since black boxes have very low saturation values, I hoped to write an algorithm that identified the black boxes and thus located the urine test strip and calibration panels relative to them. My preliminary algorithm produced this result:
All the pixels deemed by the algorithm to have a saturation value of less than 10% were highlighted red. My algorithm clearly demonstrated that many grayish-white pixels of the paper also have saturation values below 10%. However, I noticed that the blue box and most of the calibration and test strip panels remained their original colors. This observation prompted me to change the black boxes to blue, as I thought I might instead be able to identify blue boxes. This change produced this result:
As shown in the image above, all of the blue boxes remained intact. This is a great success, as I believe I can now write a few mathematical algorithms to find the columns and rows with the least red (and therefore the fewest pixels with saturation values below 10%). In doing so, I can identify the columns and rows where the calibration/test strip panels and/or the blue boxes of the placement mat are located. I will be working on these algorithms this coming week so that I may soon begin coding the section of my code that compares the colors of the test strip to those of the calibration panels.
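A minimal sketch of the red-highlighting step itself, reusing the computeSaturation() helper from the November 8 sketch above (ctx and canvas as in that sketch):

var imageInfo = ctx.getImageData(0, 0, canvas.width, canvas.height);
for (var i = 0; i < imageInfo.data.length; i += 4) {
  if (computeSaturation(imageInfo.data[i], imageInfo.data[i + 1], imageInfo.data[i + 2]) < 0.10) {
    imageInfo.data[i] = 255;    // red channel
    imageInfo.data[i + 1] = 0;  // green channel
    imageInfo.data[i + 2] = 0;  // blue channel
  }
}
ctx.putImageData(imageInfo, 0, 0); // repaint the canvas with the red highlights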
As a reminder, a link to my app can be found here.
October 31, 2014 – Rapid Urine Test Strip Reader App
I was finally able to fix my code! It turns out I had unintentionally commented out parts of my code, so it was not executing properly. My code currently finds all the light blue pixels in an uploaded picture and turns them red. As a reminder, I want to identify all the light blue pixels in the uploaded picture because doing so lets me locate the urine test strip and the calibration panels based upon the light blue box that surrounds them.
Several tests of my code have highlighted some important problems. Although I thought that roughly all shades of blue had hue values between 130 and 160, this is clearly not the case, for when I tried to convert all pixels with such hue values in an uploaded photo to a dark red color, this was the result:
As shown in the picture, not all the light blue pixels of the box were highlighted red, and, even more concerning, several seemingly random white pixels were perceived as blue by the algorithm and thus converted into red pixels. I am currently unsure how to fix this, and I am brainstorming new ways of letting the algorithm know where the test strip and calibration panels are in each uploaded photograph. This will be my next step in the development of my app.
As a reminder, a link to my app can be found here.
October 25, 2014 – Rapid Urine Test Strip Reader App
Disclaimer: I was unable to devote much time to my application because six hours of my lab time went to buying materials for and hosting CNS Family Weekend.
I was unfortunately unable to come in this week at times when Vatsal, Haley, or Dr. Riedel were in lab; thus, I was unable to ask any of them for coding assistance. I am still stuck on the algorithm that compares the converted hue values of each pixel to the color blue. After much testing, I now believe there is something wrong with my if statement:
if (hueColor >= 135 && hueColor <= 160) // the hue range I expected to cover blue
{
    imageInfo.data[i] = 160;    // red channel: dark red
    imageInfo.data[i + 1] = 0;  // green channel: zero
    imageInfo.data[i + 2] = 0;  // blue channel: zero
}
This version of my code is simply supposed to change all the blue pixels into red pixels. Although my ultimate goal is not to modify images, I thought that by editing my code to modify the pixels identified as blue, I could better assess whether my application is even identifying the correct pixels. (As a reminder, I wish to identify the blue pixels because the edges of the blue box on my test strip placement mat, as displayed in the post from October 17, signal where the image analysis algorithm should actually begin to analyze the urine test strip colors.) When I take out the if statement, all the pixels of the original image are converted into red pixels. However, when I add any sort of if statement, the canvas remains blank, as if the computer cannot process any aspect of my image display algorithm. I am still unsure why this is, but hopefully this coming week I will have more than two hours to devote to my app, and I will be able to come into lab when someone might be able to help me.
The link to my GitHub code is here.
October 18, 2014 – Rapid Urine Test Strip Reader App
Today I attempted to begin coding the image analysis portion of my application; however, I found little success. My current idea is to convert RGB values into a single hue value and then to compare all hue values down a vertical line (since all my panels are aligned on my placement mat, as shown in the picture from my previous update). Thus, today, I experimented with drawing vertical lines down the HTML Canvas, which I was able to do successfully. Then, I implemented an algorithm, taken from Dr. Riedel’s Primary Colors app, that converted the RGB values of each pixel of my Canvas into hue values. This worked fine as well. However, when I implemented an algorithm that compared the converted hue values of each pixel to the color blue (in an attempt to recognize the edges of the blue box shown in the picture from my previous update), I inadvertently created an infinite for loop that I was unable to resolve. Thus, my code is currently unable even to display the images it previously could. In this coming week, I hope to seek help from Dr. Riedel and the CS students in order to solve this dilemma. After fixing this problem, I hope to write the if statements that will create outputs explaining the results of the comparisons between reagent strip panels and those of the calibrator.
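For reference, one common RGB-to-hue conversion is sketched below; this is not necessarily the exact code from the Primary Colors app. The hue values elsewhere in this notebook appear to use a 0-240 scale (pure blue comes out near 160), so the sketch rescales degrees to that range:

function rgbToHue240(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  var max = Math.max(r, g, b), min = Math.min(r, g, b);
  if (max === min) return 0;                     // gray: hue is undefined, report 0
  var d = max - min, h;
  if (max === r)      h = ((g - b) / d + 6) % 6; // reds (yellow to magenta side)
  else if (max === g) h = (b - r) / d + 2;       // greens (yellow to cyan side)
  else                h = (r - g) / d + 4;       // blues (cyan to magenta side)
  return Math.round(h * 60 * 240 / 360);         // degrees rescaled to 0-240
}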
A link to my code is here.
October 17, 2014 – Rapid Urine Test Strip Reader App
This week I recreated the placement mat for my rapid urine test strip reader app. The purpose of this placement mat is to standardize the photo inputs of the application. This standardization makes my earlier idea of finding and analyzing the various panels of the urine test strip by counting pixels from the edges of the photograph feasible because, theoretically, all photographs will now have a similar number of pixels between the edges and the test strip panels. After discussing the concept of a standardized placement mat with Dr. Riedel, I decided to recreate it to correct previous measurement errors and to include a calibration chart. The chart represents an added benefit of the placement mat: it allows the app to take the ambient lighting of the surroundings into account. A photo of my placement mat with a urine test strip is shown below:
Ideally, the edges of the photograph should be aligned with the edges of the black box. However, after attempting to take a photo of a test strip on my placement mat, I realized that it is very difficult to take a good photo that aligns with the edges, for several reasons: on an iPhone 4, the edges shown on the screen are not the actual edges of the photograph, and it is very difficult to level one's hands and camera so that the edge of the photograph is parallel with the entire line. These represent possible sources of error in the future results of my app. As I begin coding the image analysis portion of my app this weekend and next week, I aim to test whether these flaws significantly impact the output. Since I have no idea how to address this problem at the moment, I will simply go along with what I have and hope that such errors fall within an acceptable and negligible margin of error.
10/16/14 Wikispaces Check #3
Check | Good (“G”), Needs work (“W”), Missing or Poor (“X”) | Comments |
Significant progress made since last Check (if no then grader stops and notifies RE) | W | Good results, but there needs to be at least another small entry or two for full credit here. |
Reverse Chronological | G | |
Readable/Understandable/Organized, separate experiments clearly indicated | G | |
All experiments have brief intro/purpose statement | G | |
Data summarized appropriately in a figure/graph/image | G | |
All figures/graphs/images fully labeled including controls | G | |
Data linked back to lab notebook | G | |
Units reported with all numbers | G | |
Results: Data interpreted for meaning with specific mention of controls | G | |
Results interpreted for deciding next steps | G | |
“G” x 1 pt, “X” x 0 pt, “W” x 0.5 pt | Total: 9.5 | Grader Name: DTO |
October 11, 2014 – Rapid Urine Test Strip Reader App
This week I began coding again on Google Drive. I was finally able to create five Canvases on a single page, and all five of these Canvases display the pictures that are uploaded! As a reminder, I need five pictures because the urine strip needs to be read at five different time intervals, and I thought it would be beneficial for the user to see a picture of the urine strip displayed next to the results. It is interesting to note, however, that the pictures are only displayed when the user accesses the Pictures and Results page directly (here’s a link). When the user begins at the Introduction page and clicks through the Instructions page to the Pictures and Results page, the pictures fail to display. (Try it here.) I am unsure why this error occurs, but I will be looking into it in the future. For now, I want to focus on the image analysis algorithm.
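For future reference, one way to draw an uploaded photo onto a Canvas is sketched below (element ids are illustrative, not the ones in my actual code):

var input = document.getElementById("photoInput1"); // hypothetical ids
var canvas = document.getElementById("canvas1");
var ctx = canvas.getContext("2d");

input.addEventListener("change", function () {
  var reader = new FileReader();
  reader.onload = function (e) {
    var img = new Image();
    img.onload = function () {
      ctx.drawImage(img, 0, 0, canvas.width, canvas.height); // scale to fit
    };
    img.src = e.target.result; // data URL of the uploaded file
  };
  reader.readAsDataURL(input.files[0]);
});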
There are currently several issues with image analysis. First, I discovered that the getImageData method for Canvas only returns RGBA values rather than the RGB/HSL values I had originally expected. Second, I compared the pictures displayed on the screens of an iPhone 4 and an iPhone 5 and found differences in the sizes of the objects the camera was pointed toward. This creates problems because I had originally hoped to find the panels of the strips in the picture simply by counting the number of pixels in from the edge. (I created a strip placement mat that standardizes the distances of the strip from the edges of the picture; a picture of the mat is displayed below.) However, if the sizes of the objects vary depending on the camera, this method may fail. I will investigate this matter more fully next week. I may still try the pixel-counting method, as I suspect that the difference between the iPhone 4 and iPhone 5 can be accounted for by calculations that take camera resolution into account. For now, I am still celebrating my success in displaying all five pictures on the same page.
Here’s a picture of the strip placement mat I created. I measured the dimensions of the test strips to create the boxes for placement. However, I still need to adjust some dimensions of the larger box that the frame of the camera should capture. I will be doing that this coming week.
10/8/14 == 10/10
Check | Good (“G”), Needs work (“W”), Missing or Poor (“X”) | Comments |
Significant progress made since last Check (if no then grader stops and notifies RE) | G | |
Reverse Chronological | G | |
Readable/Understandable/Organized, separate experiments clearly indicated | G | |
All experiments have brief intro/purpose statement | G | |
Data summarized appropriately in a figure/graph/image | G | make tables into figures soon |
All figures/graphs/images fully labeled including controls | G | |
Data linked back to lab notebook | G | |
Units reported with all numbers | W | what are units of RGB? |
Results: Data interpreted for meaning with specific mention of controls | G | |
Results interpreted for deciding next steps | G | |
“G” x 1 pt, “X” x 0 pt, “W” x 0.5 pt | Total: 9.5 | Grader Name: TR |
October 3, 2014 – Rapid Urine Test Strip Reader App
Due to technical difficulties with GitHub, I decided to focus my efforts this week on a non-coding aspect of my project. Because my app eventually seeks to compare the colors of a used test strip with those on a key in order to output results, the purpose of this week's efforts was to obtain the color attributes (red, green, blue, hue, saturation, and luminance values) of the key. (The RGB values below are on the standard 0-255 scale; the hue, saturation, and luminance values appear to be on a 0-240 scale.) When I am able to code again next week, I will use these values to determine an image processing algorithm to include in the Input/Output page of my diagnostic app.
Glucose
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative | 144 | 208 | 182 | 104 | 97 | 166 |
100 mg/dL | 140 | 195 | 134 | 76 | 81 | 155 |
250 mg/dL | 127 | 171 | 94 | 63 | 75 | 125 |
500 mg/dL | 140 | 143 | 68 | 42 | 85 | 99 |
1000 mg/dL | 127 | 115 | 55 | 33 | 95 | 86 |
2000+ mg/dL | 120 | 86 | 59 | 18 | 82 | 84 |
Bilirubin
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative | 241 | 225 | 159 | 32 | 179 | 188 |
Small | 237 | 215 | 163 | 28 | 161 | 188 |
Moderate | 205 | 193 | 154 | 31 | 81 | 169 |
Large | 193 | 172 | 149 | 21 | 63 | 161 |
Ketone
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative | 212 | 179 | 148 | 19 | 102 | 169 |
Trace (5mg/dL) | 230 | 173 | 154 | 10 | 145 | 181 |
Small (15 mg/dL) | 214 | 128 | 127 | 0 | 124 | 160 |
Moderate (40 mg/dL) | 180 | 99 | 105 | 237 | 84 | 131 |
Large (80 mg/dL) | 133 | 71 | 84 | 232 | 73 | 96 |
Large (160 mg/dL) | 97 | 49 | 61 | 230 | 79 | 69 |
Specific Gravity
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
1.000 | 14 | 70 | 69 | 119 | 160 | 40 |
1.005 | 38 | 113 | 72 | 98 | 119 | 71 |
1.010 | 93 | 122 | 78 | 66 | 53 | 94 |
1.015 | 118 | 134 | 57 | 48 | 97 | 90 |
1.020 | 140 | 147 | 54 | 43 | 111 | 95 |
1.025 | 139 | 143 | 48 | 42 | 119 | 90 |
1.030 | 177 | 160 | 52 | 35 | 131 | 108 |
Blood
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative | 228 | 189 | 62 | 31 | 181 | 136 |
Non-Hemolyzed Trace | ||||||
Non-Hemolyzed Moderate | ||||||
Hemolyzed Trace | 191 | 189 | 66 | 39 | 119 | 121 |
Small | 146 | 171 | 69 | 50 | 102 | 113 |
Moderate | 87 | 136 | 70 | 70 | 77 | 97 |
Large | 58 | 84 | 59 | 82 | 44 | 67 |
Note: I am currently unsure of how I should proceed in creating an algorithm to interpret non-hemolyzed blood. It is difficult to identify non-hemolyzed blood because it is not of a uniform color; instead, it is identified by blotches of black in the reagent pad. I am considering using an algorithm that identifies non-hemolyzed blood by determining the amount of variation in color of each reagent pad. The greater the variation, the greater the amount of non-hemolyzed blood. I will look more into this in the coming weeks when I begin coding again.
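A minimal sketch of this variance idea (padPixels is a hypothetical array of {r, g, b} objects sampled from a single reagent pad; the threshold would have to be calibrated by testing):

function lightnessSpread(padPixels) {
  var vals = padPixels.map(function (p) {
    return (Math.max(p.r, p.g, p.b) + Math.min(p.r, p.g, p.b)) / 2; // HSL lightness, 0-255
  });
  var mean = vals.reduce(function (a, b) { return a + b; }, 0) / vals.length;
  var variance = vals.reduce(function (a, v) {
    return a + (v - mean) * (v - mean);
  }, 0) / vals.length;
  return Math.sqrt(variance); // standard deviation of lightness across the pad
}
// A pad whose spread exceeds the calibrated threshold would be flagged as
// containing non-hemolyzed blood.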
Protein
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative | 217 | 231 | 117 | 45 | 169 | 164 |
Trace | 186 | 211 | 109 | 50 | 129 | 151 |
30 mg/dL | 159 | 187 | 113 | 55 | 85 | 141 |
100 mg/dL | 148 | 189 | 149 | 81 | 57 | 159 |
300 mg/dL | 111 | 174 | 153 | 107 | 67 | 134 |
2000+ mg/dL | 91 | 154 | 137 | 109 | 62 | 115 |
Urobilinogen
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
0.2 mg/dL (Normal) | 254 | 201 | 151 | 19 | 235 | 191 |
1 mg/dL (Normal) | 251 | 169 | 147 | 8 | 223 | 187 |
2 mg/dL | 235 | 146 | 142 | 2 | 168 | 177 |
4 mg/dL | 240 | 138 | 134 | 2 | 187 | 176 |
8 mg/dL | 232 | 107 | 137 | 230 | 175 | 160 |
Nitrite
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative (No Pink) | 252 | 255 | 215 | 43 | 240 | 221 |
Positive (Least Pink) | 255 | 238 | 205 | 26 | 240 | 216 |
Positive (Most Pink) | 255 | 205 | 197 | 6 | 240 | 213 |
Note: The presence of nitrite is determined by the presence of pink in the reagent pad. There is a range of shades of pink, and thus a range of color values for which the diagnostic app needs to return a positive nitrite reading. On the key, there were two panels of pink: one nearly white like the negative reading and one mostly pink. I will set these two values as the boundaries for what constitutes pink.
Leukocytes
Strip Reading | Red | Green | Blue | Hue | Saturation | Luminance |
Negative | 230 | 234 | 183 | 43 | 132 | 196 |
Trace | 212 | 208 | 179 | 35 | 67 | 184 |
Small | 185 | 180 | 160 | 32 | 36 | 162 |
Moderate | 144 | 118 | 145 | 199 | 26 | 124 |
Large | 129 | 104 | 146 | 184 | 40 | 118 |
My app will seek to obtain the same color attributes from the pictures taken of diagnostic reagent pads. It will then compare the color of the strip to the color of the key by computing a distance value, calculated as the square root of the sum of the squared differences in each attribute between the submitted picture and the key. The app will then output the strip reading with the minimal distance value. As mentioned above, I will be working on putting this mathematical logic into computer code this coming week.
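A minimal sketch of this matching step (names hypothetical): key would be an array of entries like { reading: "100 mg/dL", r: 140, g: 195, b: 134, hue: 76, sat: 81, lum: 155 } taken from the tables above, and sample would hold the same attributes measured from the photographed pad.

function closestReading(sample, key) {
  var attrs = ["r", "g", "b", "hue", "sat", "lum"];
  var best = null, bestDist = Infinity;
  key.forEach(function (entry) {
    var sum = 0;
    attrs.forEach(function (a) {
      var d = sample[a] - entry[a];
      sum += d * d; // squared difference in this attribute
    });
    var dist = Math.sqrt(sum); // Euclidean distance across all six attributes
    if (dist < bestDist) {
      bestDist = dist;
      best = entry.reading;
    }
  });
  return best; // the strip reading with the minimal distance value
}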
September 27, 2014 – Rapid Urine Test Strip Reader App
In the past twenty-four hours, I ordered my test strips, which will arrive in late October, and I found a urine test key that granted more insight into coding the third page of my app, Input/Output. I noticed that two panels (glucose and bilirubin) are read at 30 seconds, one at 40 (ketone), one at 45 (specific gravity), five at 60 (blood, pH, protein, urobilinogen, and nitrite), and one at 120 (leukocytes). Thus, I have designed my webpage to prompt the user for five files, which are then displayed on five Canvases. Although I was able to get the Canvases and file upload buttons to work, I had little success with anything else: I was unable to figure out how to display the uploaded file onto the Canvas. In frustration, I attempted to work on another segment of my code and researched how to execute scripts at set intervals. I found a couple of helpful websites, but I did not have time to actually begin coding. This coming week, I will try to meet with Dr. Riedel to figure out more about displaying the uploaded files onto the Canvas, and I will look more into the timer.
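From that research, timed prompts could look roughly like the sketch below (promptUser is a hypothetical callback; setTimeout takes milliseconds). In practice the countdown would start from a button press when the strip is dipped, not from page load:

var readTimes = [30, 40, 45, 60, 120]; // seconds after dipping, per the key
readTimes.forEach(function (t, i) {
  setTimeout(function () {
    promptUser(i); // e.g., enable the (i+1)th "Choose Image" button
  }, t * 1000);
});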
Here is a direct link to the Input/Output page of my code.
September 26, 2014 – Rapid Urine Test Strip Reader App
This past week I finished coding the framework of my app. I created four pages–Introduction, Instructions, Input/Output, and Results Summary–that contain working links guiding the user from one page to the next. I decided to code all of this in HTML5 and JavaScript, so each page is actually a separate HTML file in my Google Drive folder. I believe having multiple pages helps compartmentalize the code into more manageable chunks, which makes coding easier, as I do not have to learn additional frameworks such as jQuery Mobile. (However, I will utilize jQuery Mobile should it become necessary.)
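The page-to-page links amount to little more than the sketch below (the file name and element id are hypothetical):

document.getElementById("nextButton").addEventListener("click", function () {
  window.location.href = "instructions.html"; // go to the next page's HTML file
});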
This coming weekend, I will be primarily focusing on the image analysis aspect of my program. I will be analyzing an example app given to me by Dr. Riedel, and I am hoping to make significant progress on the Input/Output page of my program.
As a reminder, the link to my app is here.
September 19, 2014 – Rapid Urine Test Strip Reader App
After deciding that my independent project for the semester would involve making a urine test strip diagnostic reader, I created a storyboard of my app:
The storyboard shown at the bottom of the page (page 104 of my lab notebook) essentially lists four main pages of the application: an introduction page, an instructions page, a data intake/result output page, and a results summary page (an organized printout of the results). Each page will have a button that guides the user to the next page when clicked. The most challenging part of this setup will be the third page. Ideally, the user will press “Choose Image,” select his/her picture, and the picture will then be displayed on the webpage on a Canvas. I hope to then display the results beside the picture, as I believe this will increase the credibility of the app's results: any user can immediately compare the urine test strip picture to the result generated. I believe this may be doable via scripts, but I am still researching this aspect of the app. The results summary page is still up in the air, as I may simply include a results summary at the bottom of the third page.
I have also started work on the code of the app itself. My app and code are viewable here. The introduction page is essentially done, with the exception of a button linking it to the next page. This coming week, I will begin building the entire framework of the app (making the three remaining pages and linking them with buttons) and look more closely into the picture-taking and result-displaying features.
Lastly, I had an idea for automatic picture taking, inspired by photobooth apps that take pictures after a timer has been set. I would really like to implement this idea; however, I am not sure how well it would work on mobile devices, because the camera is a separate application, and I am not sure whether the timer in my application could still control the camera while my application is in the background. I will look more into this, though, as I think it would be a beneficial feature that would minimize human timing errors.
9/15/14 WIKI RECHECK == 9.5/10
THIS IS GOOD WIKI WORK! (-1/2 point for making me have to do a recheck)
September 13, 2014 – Pipette Calibration Check
The purpose of this experiment was to assess the accuracy and precision of the micropipettes used in lab in order to ensure valid future measurements. Pipettes deemed inaccurate or imprecise are to be sent to a facility for re-calibration.
The results of the calibration check of the 100 uL pipette I1380677G and the 1000 uL pipette A1102747K are as follows:
Pipette Setting | Mean Weight (g) | Mean Volume (uL) | Standard Deviation (uL) | Coefficient of Variation (%) | Mean Error (%) |
100 uL set at 10 uL | 0.0088 | 8.8 | 0.3 | 3.26 | -12 |
100 uL set at 100 uL | 0.0976 | 98 | 1 | 1.22 | -2 |
1000 uL set at 100 uL | 0.1014 | 101.8 | 0.3 | 2.96 | 1.8 |
1000 uL set at 1000 uL | 0.9823 | 980 | 20 | 1.87 | -2 |
- All values were calculated using the formulas specified on page 2 of this protocol; a worked example follows these notes. All calculations can be viewed on pages 97 to 101 of my lab notebook.
- A Z-factor of 1.0040 was used in calculating the mean volume, as the temperature at the time of the experiment was 25 degrees Celsius.
- Although an evaporation rate of zero was assumed in calculating the mean volume, it was noted that, while weighing the volumes of water that had been pipetted, the weight of the water consistently decreased as time passed, suggesting that the evaporation rate may have been non-zero.
- It was also noted that the 1000 uL pipette seemed to leak water at higher volumes.
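For concreteness, here is a worked instance of those formulas using the first row of the table above (taking the Z-factor as the standard mg-to-uL conversion factor; the protocol's exact notation may differ):
- Mean volume = mean weight x Z-factor = 8.8 mg x 1.0040 uL/mg ≈ 8.8 uL (0.0088 g = 8.8 mg)
- Mean error = (mean volume – nominal volume) / nominal volume x 100% = (8.8 uL – 10 uL) / 10 uL x 100% = -12%
- Coefficient of variation = standard deviation / mean volume x 100% ≈ 0.3 uL / 8.8 uL x 100% ≈ 3% (the tabulated 3.26% reflects the unrounded standard deviation)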
Pipette Setting | Mean Volume (uL) | Acceptable Accuracy Range (uL) | Accurate/Not Accurate |
100 uL set at 10 uL | 8.8 | 9.65 – 10.35 | Not Accurate |
100 uL set at 100 uL | 98 | 99.2 – 100.8 | Not Accurate |
1000 uL set at 100 uL | 101.8 | 97-103 | Accurate |
1000 uL set at 1000 uL | 980 | 992-1008 | Not Accurate |
Pipette Setting | Coefficient of Variation (%) | Acceptable Precision Range (%) | Precise/Not Precise |
100 uL set at 10 uL | 3.26 | 1.0 | Not Precise |
100 uL set at 100 uL | 1.22 | 0.15 | Not Precise |
1000 uL set at 100 uL | 2.96 | 0.6 | Not Precise |
1000 uL set at 1000 uL | 1.87 | 0.15 | Not Precise |
- The acceptable accuracy and precision ranges were obtained from the values specified on page 14 of the aforementioned protocol.
Based on the results shown above, both pipettes need to be sent to a facility for re-calibration. Neither pipette reliably transfers liquids at the specified volumes, and thus both may skew future measurements and results without re-calibration. It is interesting to note that one tested setting (the 1000 uL pipette set at 100 uL) provided accurate but imprecise measurements. Although its mean volume falls within the acceptable accuracy range, this pipette is still by no means acceptable, as its large variance makes it unreliable. However, it should be noted that accuracy and precision both rely heavily on the attentiveness of the micropipette user. For instance, the inaccuracies shown above may have been caused by human error in reading measurements, by variations in pre-rinsing the pipette tips (I personally made sure to rinse each tip three times with the maximum volume of water, though this was unspecified in the protocol), or perhaps even by unintentionally warming the pipette tips (thus increasing the evaporation rate) by holding them too long. Thus, while micropipettes I1380677G and A1102747K do need to be re-calibrated before future use, students should also always be mindful of possible human error.
9/12/14 == 1/10
THIS IS UNACCEPTABLE WIKI WORK.
Check | Good (“G”), Needs work (“W”), Missing or Poor (“X”) | Comments |
Significant progress made since last Check (if no then grader stops and notifies RE) | W | This wiki makes it look like you’ve put in about an hour of work! |
Reverse Chronological | ||
Readable/Understandable/Organized, separate experiments clearly indicated | ||
All experiments have brief intro/purpose statement | ||
Data summarized appropriately in a figure/graph/image | ||
All figures/graphs/images fully labeled including controls | ||
Data linked back to lab notebook | ||
Units reported with all numbers | ||
Results: Data interpreted for meaning with specific mention of controls | ||
Results interpreted for deciding next steps | ||
“G” x 1 pt, “X” x 0 pt, “W” x 0.5 pt | Total: 0 | Grader Name: TR |
September 11, 2014 – Plans for the Semester (Page 95 of Lab Notebook)
After much brainstorming, I have decided to focus my efforts on developing an app that can analyze urine test strip results. This was primarily inspired by Dr. Riedel’s work on an aquarium diagnostic as well as by my own experience working in a volunteer medical clinic over the summer. As someone who is overanxious and worried that I may misinterpret colors and therefore misdiagnose symptoms, I personally would love an app that limits subjectivity and standardizes urine test strip analysis. I think this app would be very useful in the current healthcare environment, especially to those who are colorblind or simply overanxious like myself.