A system for monitoring ocular movement can comprise a housing, a plurality of light sources, at least one imager, and at least one controller. The housing can define a cavity configured to allow each eye of a patient to view an interior region of the housing. The plurality of light sources can be oriented within the interior region of the housing. The at least one imager can be oriented to capture an image of an eye of the patient during an evaluation. The at least one controller can comprise at least one processor and a non-transitory computer-readable medium storing instructions. The instructions, when executed by the processor, can cause the controller to receive image data from the at least one imager and illuminate the plurality of light sources in a predetermined and reconfigurable sequence.
US Patent: US11185224B2
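A minimal sketch of the control flow described in the abstract above, assuming hypothetical `led_driver` and `imager` interfaces for the headset hardware; the actual device firmware and APIs are not specified by the patent.

```python
import time

class EyeMonitorController:
    """Illustrative controller: steps through a reconfigurable light
    sequence while collecting frames from the imager."""

    def __init__(self, led_driver, imager, sequence):
        self.led_driver = led_driver    # hypothetical LED driver interface
        self.imager = imager            # hypothetical camera interface
        self.sequence = list(sequence)  # e.g. [("left", 1.0), ("right", 1.0), ...]

    def reconfigure(self, new_sequence):
        # The stimulation sequence is predetermined but reconfigurable.
        self.sequence = list(new_sequence)

    def run_evaluation(self):
        frames = []
        for light_id, duration_s in self.sequence:
            self.led_driver.turn_on(light_id)
            t_end = time.time() + duration_s
            while time.time() < t_end:
                frames.append(self.imager.capture())  # receive image data
            self.led_driver.turn_off(light_id)
        return frames
```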
In a method of generating a neural network used to detect a feature of medical significance from a body image data input, test data images are divided into patches. Each patch is labelled as either corresponding to the feature or not corresponding to the feature. One trained fully connected layer in a pretrained general-purpose convolutional neural network is replaced with a new fully connected layer. The pretrained convolutional neural network is retrained with the set of labelled patches to generate a feature-specific convolutional neural network that includes at least one feature-specific fully connected layer that maps the body image data to the feature of medical significance when the feature of medical significance is present in the body image data input.
US Patent: US12159229B2
WIPO Patent: WO2020243460A1
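A hedged PyTorch sketch of the retraining step described above: the final fully connected layer of a pretrained general-purpose CNN is swapped for a new, feature-specific layer and the network is fine-tuned on labelled patches (feature present / absent). The backbone, optimizer, and hyperparameters here are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained general-purpose CNN (ImageNet weights).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace one trained fully connected layer with a new one:
# two outputs -> patch contains the feature of medical significance, or not.
net.fc = nn.Linear(net.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

def retrain(patch_loader, epochs=5):
    """Fine-tune on labelled patches; patch_loader yields (patch, label) batches."""
    net.train()
    for _ in range(epochs):
        for patches, labels in patch_loader:
            optimizer.zero_grad()
            loss = criterion(net(patches), labels)
            loss.backward()
            optimizer.step()
```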
In a method for determining if a test data set is anomalous in a deep neural network that has been trained with a plurality of training data sets, resulting in back propagated training gradients having statistical measures thereof, the test data set is forward propagated through the deep neural network so as to generate test data intended labels including at least original data, prediction labels, and segmentation maps. The test data intended labels are back propagated through the deep neural network so as to generate a test data back propagated gradient. If the test data back propagated gradient differs from one of the statistical measures of the back propagated training gradients by a predetermined amount, then an indication that the test data set is anomalous is generated. The statistical measures of the back propagated training gradients include an average of all the back propagated training gradients.
US Patent: US20220327389A1
WIPO Patent: WO2021046300A1
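A minimal sketch of the test-time check described above, assuming a PyTorch model and a precomputed statistic (here, a mean) of the back propagated training gradients; the choice of gradient vector, deviation measure, and threshold are assumptions for illustration.

```python
import torch

def is_anomalous(model, criterion, test_input, train_grad_mean, threshold):
    """Forward-propagate the test data, back-propagate its intended (predicted)
    labels, and flag the sample if its gradient deviates too far from the
    average training gradient."""
    model.zero_grad()
    output = model(test_input)                 # forward pass
    intended = output.argmax(dim=1)            # prediction labels as intended labels
    loss = criterion(output, intended)
    loss.backward()                            # back-propagate intended labels

    # Flatten all parameter gradients into a single test gradient vector.
    test_grad = torch.cat([p.grad.flatten() for p in model.parameters()
                           if p.grad is not None])

    # Compare against the stored statistic of the training gradients.
    deviation = torch.norm(test_grad - train_grad_mean)
    return bool(deviation > threshold)
```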
Neural networks and learning algorithms can use a variance of gradients to provide a heuristic understanding of the model. The variance of gradients can be used in active learning techniques to train a neural network. Techniques include receiving a dataset with a vector. The dataset can be annotated and a loss calculated. The loss value can be used to update the neural network through backpropagation. An updated dataset can be used to calculate additional losses. The gradients computed from these loss values can be added to a pool of gradient vectors. A variance of gradients can be calculated from the pool of gradient vectors. The variance of gradients can be used to update the neural network.
US Patent: US12079738B2
Germany Patent: DE102022102929A1
China Patent: CN114943264A
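A hedged sketch of the gradient-pooling step described above: each annotated sample's loss is back-propagated, its gradient vector is added to a pool, and the variance of the pooled gradients is computed (e.g., to guide active learning). The per-sample gradient choice and function names are assumptions, not the patented procedure.

```python
import torch

def gradient_vector(model, criterion, x, y):
    """Back-propagate the loss for one annotated sample and return its
    flattened parameter gradient vector."""
    model.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    return torch.cat([p.grad.flatten().clone() for p in model.parameters()
                      if p.grad is not None])

def variance_of_gradients(model, criterion, annotated_samples):
    """Build a pool of gradient vectors and return their per-dimension variance."""
    pool = torch.stack([gradient_vector(model, criterion, x, y)
                        for x, y in annotated_samples])
    return pool.var(dim=0)  # variance of gradients across the pool
```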
Traffic signs are critical for maintaining the safety and efficiency of our roads. Therefore, we need to carefully assess the capabilities and limitations of automated traffic sign detection systems. Existing traffic sign datasets are limited in terms of the type and severity of challenging conditions. Metadata corresponding to these conditions are unavailable, and it is not possible to investigate the effect of a single factor because numerous conditions change simultaneously. To overcome the shortcomings in existing datasets, we introduced the CURE-TSD-Real dataset, which is based on simulated challenging conditions that correspond to adversaries that can occur in real-world environments and systems.
Dataset: Google form for dataset access
Paper: https://arxiv.org/abs/1908.11262
We generated synthesized video sequences with a professional game development tool, Unreal Engine 4 (UE4). We utilized UE4 to synthesize video sequences because of three main advantages. First, objects in UE4 have metadata including position, size, and bounding box, which eliminates the manual labeling process. Second, environmental conditions including weather, time, and lighting can be fully controlled in UE4. We can change specific environmental conditions and levels while fixing other parameters to perform controlled experiments. Third, UE4 is relatively user friendly and is supported by a large developer community. We started the data generation process by creating a virtual city that included roads, street lamps, traffic signs, and background environment. We added global lighting sources to make the virtual environment more realistic. We generated a car object and linked that object with two functions, one for obtaining the location and the other for showing the object on the screen. We attached a camera object to the car object, designed a driving path, created a path follower, configured the car speed, and placed the vehicle on the designed path. To match the number of real-world sequences, we generated 49 simulated sequences, which leads to a total of 98 reference videos denoted as challenge-free sequences.
Dataset: Google form for dataset access
Paper: https://arxiv.org/abs/1902.06857
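The labeling advantage mentioned above can be summarized with a short sketch: because each UE4 object carries position, size, and bounding-box metadata, annotations can be exported programmatically rather than drawn by hand. The `render_frame` and `get_scene_objects` helpers below are hypothetical stand-ins for the engine-side export, not actual UE4 API calls.

```python
import json

def export_annotations(sequence_id, num_frames, render_frame, get_scene_objects):
    """Hypothetical export loop: renders each frame of a simulated driving
    sequence and records traffic-sign bounding boxes from object metadata."""
    annotations = []
    for frame_idx in range(num_frames):
        render_frame(sequence_id, frame_idx)       # engine-side capture (assumed)
        for obj in get_scene_objects(frame_idx):   # metadata query (assumed)
            if obj["type"] == "traffic_sign":
                annotations.append({
                    "frame": frame_idx,
                    "sign_class": obj["sign_class"],
                    "bbox": obj["bounding_box"],   # [x, y, width, height]
                })
    with open(f"sequence_{sequence_id:02d}_labels.json", "w") as f:
        json.dump(annotations, f, indent=2)
```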
In this dataset, we investigate the robustness of traffic sign recognition algorithms under challenging conditions. Existing datasets are limited in terms of their size and challenging condition coverage, which motivated us to generate the Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR) dataset. It includes more than two million traffic sign images based on real-world and simulator data. We benchmark the performance of existing solutions in real-world scenarios and analyze the performance variation with respect to challenging conditions. We show that challenging conditions can decrease the performance of baseline methods significantly, especially if these challenging conditions result in loss or misplacement of spatial information. We also investigate the effect of data augmentation and show that utilizing simulator data along with real-world data enhances the average recognition performance in real-world scenarios.
Dataset: https://ghassanalregib.com/cure-tsr/
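A small sketch of the augmentation setup described above, assuming PyTorch-style image-folder datasets for the real and simulator portions of the training data; the directory layout, image size, and transforms are illustrative rather than the dataset's actual organization.

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Shared preprocessing for real-world and simulator traffic sign images.
preprocess = transforms.Compose([
    transforms.Resize((28, 28)),   # assumed crop size
    transforms.ToTensor(),
])

# Illustrative folder layout: one subfolder per sign class.
real_data = datasets.ImageFolder("cure_tsr/real/train", transform=preprocess)
sim_data = datasets.ImageFolder("cure_tsr/simulator/train", transform=preprocess)

# Augment real-world training data with simulator data.
train_loader = DataLoader(ConcatDataset([real_data, sim_data]),
                          batch_size=64, shuffle=True)
```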
We utilized our patented lab-on-a-headset platform to perform an automated light reflex test and generated our RAPD dataset. Lab-on-a-headset is an ultra-portable device with on-board processing capability, which enabled us to capture and preview patient videos during the clinical study. We predefined the test sequence by connecting to the headset and utilizing the graphical user interface, which provides full control over the test stimuli. Tested subjects were stimulated with the automated light sequences and their pupillary reactions were recorded simultaneously as high-definition streams. Sample images captured with the introduced headset are shown in the referenced papers. We used the infrared frames to assess relative afferent pupillary defect (RAPD) in this study. We obtained approvals from the Institutional Review Board committees of Emory University and Georgia Institute of Technology. RAPD conditions of the subjects were determined by clinicians with the manual swinging flashlight test as well as the neutral density filter test. The RAPD dataset used in this study included 22 subjects, half of which correspond to a control group without RAPD, whereas the other half have positive RAPD. Four out of ten males and seven out of twelve females have positive RAPD. The average age of the participants is around 52 years for subjects without RAPD and 56 years for RAPD-positive subjects, with standard deviations of approximately 14 and 10 years, respectively.
Dataset: Not publicly available
Paper: https://arxiv.org/abs/1908.02300, https://arxiv.org/abs/1905.08886
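A minimal sketch of the kind of predefined stimulation and recording sequence described above, mirroring the swinging-flashlight protocol; the headset interface (`headset.illuminate`, `headset.record`) is hypothetical and not the device's actual API.

```python
def automated_light_reflex_test(headset, cycles=3, stimulus_s=2.0):
    """Alternately illuminate each eye while recording both pupils,
    in the spirit of the swinging flashlight test."""
    recordings = []
    for _ in range(cycles):
        for eye in ("left", "right"):
            headset.illuminate(eye, duration_s=stimulus_s)  # assumed interface
            frames = headset.record(duration_s=stimulus_s)  # infrared HD frames
            recordings.append({"stimulated_eye": eye, "frames": frames})
    return recordings
```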