
(SD2020-340) Algorithm-Hardware Co-Optimization For Efficient High-Dimensional Computing

With the emergence of the Internet of Things (IoT), many applications run machine learning algorithms to perform cognitive tasks. These algorithms have proven effective for many tasks, e.g., object tracking, speech recognition, and image classification. However, since sensory and embedded devices generate massive data streams, limited device resources pose huge technical challenges. For example, although Deep Neural Networks (DNNs) such as AlexNet and GoogleNet provide high classification accuracy for complex image classification tasks, their high computational complexity and memory requirements hinder their usability in a broad variety of real-life (embedded) applications where device resources and power budgets are limited. Furthermore, in IoT systems, sending all the data to a powerful computing environment, e.g., the cloud, cannot guarantee scalability and real-time response. It is also often undesirable due to privacy and security concerns. Thus, we need alternative computing methods that can process the large amounts of data at least partly on the less powerful IoT devices. Brain-inspired Hyperdimensional (HD) computing has been proposed as such an alternative, processing cognitive tasks in a more lightweight way. HD computing is based on the observation that brains compute with patterns of neural activity that are not readily associated with numbers. Recent research has instead utilized high-dimensional vectors (e.g., more than a thousand dimensions), called hypervectors, to represent neural activities, and has demonstrated success on many cognitive tasks such as activity recognition, object recognition, language recognition, and bio-signal classification.
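To make the hypervector idea concrete, the following is a minimal sketch of HD classification in plain Python. The dimensionality, feature encoding, and sample data are invented for illustration; real HD systems typically use ~10,000 dimensions and application-specific encoders.

```python
import random

random.seed(0)
D = 1024  # hypervector dimensionality (illustrative; real systems often use ~10,000)

def random_hv():
    # Random bipolar hypervector: each element is +1 or -1.
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    # Element-wise multiplication associates two hypervectors.
    return [x * y for x, y in zip(a, b)]

def bundle(hvs):
    # Element-wise majority (sign of the sum) superimposes hypervectors.
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    # Normalized dot product: near 0 for unrelated vectors, near 1 for similar ones.
    return sum(x * y for x, y in zip(a, b)) / D

# Encode a sample as the bundle of (feature-id, value-level) bindings.
feature_ids = [random_hv() for _ in range(4)]
levels = {0: random_hv(), 1: random_hv()}

def encode(sample):
    return bundle([bind(f, levels[v]) for f, v in zip(feature_ids, sample)])

# "Training" bundles encoded samples into one prototype hypervector per class;
# inference compares a query against each prototype by similarity.
class_a = bundle([encode([0, 0, 1, 1]), encode([0, 1, 1, 1])])
class_b = bundle([encode([1, 1, 0, 0]), encode([1, 0, 0, 0])])
query = encode([0, 0, 1, 1])
prediction = "A" if similarity(query, class_a) > similarity(query, class_b) else "B"
```

Note that both training and inference reduce to element-wise operations over wide vectors, which is what makes the approach lightweight relative to DNN inference.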

(SD2019-340) Collaborative High-Dimensional Computing

Internet of Things (IoT) applications often analyze collected data using machine learning algorithms. As the amount of data keeps increasing, many applications send the data to powerful systems, e.g., data centers, to run the learning algorithms. On the one hand, sending the original data is not desirable due to privacy and security concerns. On the other hand, many machine learning models may require unencrypted (plaintext) data, e.g., original images, to train models and perform inference. When offloading these computation tasks, sensitive information may be exposed to an untrustworthy cloud system that is susceptible to internal and external attacks. In many IoT systems, the learning procedure should be performed with data that is held by a large number of user devices at the edge of the Internet. These users may be unwilling to share the original data with the cloud and other users if security concerns cannot be addressed.

(SD2019-275) Mixed-Signal Acceleration Of Deep Neural Networks

Deep Neural Networks (DNNs) are revolutionizing a wide range of services and applications such as language translation, transportation, intelligent search, e-commerce, and medical diagnosis. These benefits are predicated on the performance and energy efficiency delivered by hardware platforms. With the diminishing benefits from general-purpose processors, there has been an explosion of digital accelerators for DNNs. Mixed-signal acceleration is also gaining traction. Albeit low-power, mixed-signal circuitry suffers from a limited range of information encoding, is susceptible to noise, imposes Analog-to-Digital (A/D) and Digital-to-Analog (D/A) conversion overheads, and lacks fine-grained control mechanisms. Realizing the full potential of mixed-signal technology requires a balanced design that brings mathematics, architecture, and circuits together.

(SD2020-332) F5‐HD: Fast Flexible FPGA‐based Framework for Refreshing Hyperdimensional Computing

Hyperdimensional (HD) computing is a novel computational paradigm that emulates brain functionality in performing cognitive tasks. The underlying computation of HD involves a substantial number of element-wise operations (e.g., additions and multiplications) on ultra-wide hypervectors, at granularities as small as a single bit, which can be effectively parallelized and pipelined. In addition, though different HD applications might vary in the number of input features and output classes (labels), they generally follow the same computation flow. Such characteristics of HD computing match the intrinsic capabilities of FPGAs exceptionally well, making these devices a unique solution for accelerating these applications.

Mutation Organization Software for Adaptive Laboratory Evolution (ALE) Experimentation

Adaptive Laboratory Evolution (ALE) is a tool for the study of microbial adaptation. The typical execution of an ALE experiment involves cultivating a population of microorganisms under defined conditions (i.e., in a laboratory) for a period of time that enables the selection of improved phenotypes. Standard model organisms, such as Escherichia coli, have proven well suited for ALE studies due to their ease of cultivation and storage, fast reproduction, well-known genomes, and clear traceability of mutational events. With the advent of accessible whole-genome resequencing, associations can be made between selected phenotypes and genotypic mutations. A review of ALE methods lists 34 separate ALE studies to date. Each study reports on novel combinations of selection conditions and the resulting microbial adaptive strategies. Large-scale analysis of ALE results from such consolidation efforts could be a powerful tool for identifying and understanding novel adaptive mutations.

IgEvolution: A Novel Tool for Clonal Analysis of Antibody Repertoires

Constructing antibody repertoires is an important error-correcting step in analyzing immunosequencing datasets and a prerequisite for reconstructing the evolutionary (clonal) development of antibodies. However, state-of-the-art repertoire construction tools typically miss low-abundance antibodies that often represent internal nodes in clonal trees and are crucially important for clonal tree reconstruction, making the existing algorithms ill suited for this task.

Fast Deep Neural Network (DNN) Training/Execution on Hardware Platforms

With the growing range of applications for Deep Neural Networks (DNNs), the demand for higher accuracy has directly impacted the depth of state-of-the-art models. Although deeper networks are shown to achieve higher accuracy, they suffer from drastically longer training times and slower convergence due to their high computational complexity.

System And Method For Binaural Spatial Processing Of Audio Signals

Audio signal processing is the intentional modification of sound signals to create an auditory effect for a listener, altering the perception of the temporal, spatial, pitch, and/or volume aspects of the received sound. It can be performed in the analog and/or digital domain by audio signal processing systems. For example, analog processing techniques can use circuitry to modify the electrical signals associated with the sound, whereas digital processing techniques can apply algorithms to modify the digital representation, e.g., binary code, corresponding to those electrical signals. Binaural sound recordings are produced by recording in stereo with two microphones placed inside the ears of a human or a mannequin head. Such recordings include most of the cues for sound spatialization detected by humans; thus, they can realistically convey the localization of the recorded sounds and, in effect, provide a three-dimensional experience of the soundscape for the listener.
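One of the spatialization cues captured by such recordings is the interaural time difference (ITD), the arrival-time gap between the two ears. As a rough illustration only, the sketch below estimates the ITD with the classic spherical-head (Woodworth) approximation; the head radius is a typical value, not a measured one, and real binaural processing uses full head-related transfer functions.

```python
import math

HEAD_RADIUS_M = 0.0875   # roughly an average adult head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    # Woodworth's approximation for a distant source:
    # ITD = (r / c) * (theta + sin(theta)), theta = azimuth in radians.
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta))

# A source directly ahead (0 degrees) gives no time difference;
# a source 90 degrees to the side gives the maximum ITD, roughly 0.66 ms.
itd_front = itd_seconds(0)
itd_side = itd_seconds(90)
```

A renderer can apply this delay (together with a level difference) to a mono source to place it in the stereo field, which is one ingredient of binaural spatial processing.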

AI Enabled UAV Route-Planning Algorithm with Applications to Search and Surveillance

Portable UAVs such as quad-copters have made huge inroads in the last several years in various fields of aerial photography and surveillance. Drones can efficiently and cheaply hover over or follow a target of interest and capture unique perspectives of wildlife, real estate, sporting events, and operational environments such as law enforcement or the military. More challenging, however, is the application of UAVs to large-area search and surveillance. In these scenarios, a search pattern must be established that can cover many square miles and is far too expansive for a UAV's typical battery to sustain. To make UAVs more broadly effective in large-area search and target identification, new path-planning algorithms are needed that efficiently eliminate areas of low probability while focusing on the search areas most likely to contain the subject of interest. Likewise, improved image classifiers are needed to aid in separating targets of interest from background terrain, thus expediting the search within given battery limitations.
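The probability-guided search idea can be sketched as a greedy planner over a cell grid: fly to the most probable cell, and on a non-detection, discount that cell's probability with a Bayesian update and renormalize. The grid values and detection probability below are invented, and a real planner would also account for travel cost and battery state.

```python
def bayes_miss_update(grid, cell, p_detect):
    # After searching a cell without finding the target, its probability
    # shrinks by the factor (1 - p_detect); renormalize the whole grid.
    r, c = cell
    grid[r][c] *= (1 - p_detect)
    total = sum(sum(row) for row in grid)
    for row in grid:
        for j in range(len(row)):
            row[j] /= total

def plan_route(grid, p_detect=0.8, steps=4):
    route = []
    for _ in range(steps):
        # Greedily pick the cell with the highest current probability.
        cells = ((r, c) for r in range(len(grid)) for c in range(len(grid[0])))
        best = max(cells, key=lambda rc: grid[rc[0]][rc[1]])
        route.append(best)
        bayes_miss_update(grid, best, p_detect)
    return route

# Hypothetical prior over a 3x3 search area (probabilities sum to 1).
prior = [[0.10, 0.40, 0.10],
         [0.05, 0.20, 0.05],
         [0.03, 0.04, 0.03]]
route = plan_route([row[:] for row in prior])
```

The update automatically steers later waypoints away from already-searched, low-probability cells, which is the "eliminate areas of low probability" behavior described above.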

Augmented Reality For Time-Delayed Telesurgical Robotics

Teleoperation brings the advantage of remote control and manipulation to distant locations or harsh or constrained environments. Such a system allows operators to send commands from a remote console, traditionally called a master device, to a robot, traditionally called a slave device, and offers synchronization of movements. This allows the remote user to operate as if on-site, making teleoperational systems an ideal, and often the only, solution for a wide range of applications such as underwater exploration, space robotics, mobile robots, and telesurgery. The main technical challenge in realizing remote telesurgery (and, similarly, all remote teleoperation) is the latency from the communication distance between the master and slave. This delay causes overshoot and oscillations in the commanded positions, which are observable and statistically significant with as little as 50 msec of round-trip communication delay. Predictive displays are virtual-reality renderings, generally designed for space operations, that show a prediction of the events to follow in a short amount of time. They can be used to overcome the negative effects of delay by giving the operator immediate feedback from a predicted environment. Furthermore, they do not suffer the stability issues that arise with delayed haptic feedback. Early predictive displays included manipulation of the Engineering Test Satellite 7 from ground control, where the round-trip delay can be up to 7 sec, and Augmented Reality (AR) renderings where the prediction is overlaid on raw image data. These strategies can be applied to telesurgery, but require overcoming the unique challenges of calculating and tracking the 3D environment for a full environment prediction, which includes non-rigid material such as tissue.
Furthermore, prior work in the surgical robotics community highlights the need for active tracking, rather than relying only on kinematic calibrations, to localize the slave, due to the millimeter scale of a surgical operation and the often-utilized cable-driven actuation.
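The core predictive-display idea can be illustrated with a deliberately simplified one-dimensional model: the console applies the commands still "in flight" during the round-trip delay to the last confirmed slave position, so the rendered prediction responds immediately instead of after the delay. The delay length, command values, and the assumption that commands execute exactly as sent are all hypothetical simplifications.

```python
from collections import deque

DELAY_STEPS = 3  # round-trip delay expressed in control ticks (assumed)

class PredictiveDisplay:
    def __init__(self):
        self.in_flight = deque()  # commands sent but not yet reflected in feedback
        self.confirmed = 0.0      # last slave position received from the remote site

    def send_command(self, delta):
        self.in_flight.append(delta)
        if len(self.in_flight) > DELAY_STEPS:
            # The oldest command has now round-tripped; in this simplified
            # model its effect is folded into the confirmed state.
            self.confirmed += self.in_flight.popleft()

    def predicted_position(self):
        # Render the confirmed state plus the effect of unconfirmed commands,
        # giving the operator immediate (zero-delay) visual feedback.
        return self.confirmed + sum(self.in_flight)

display = PredictiveDisplay()
for delta in [1.0, 0.5, -0.25, 0.5]:
    display.send_command(delta)
```

A full telesurgical predictive display would replace the scalar state with a tracked 3D model of the instruments and tissue, which is exactly the hard part identified above.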

Source Tracking Through Spectral Matching To Mass Spec Databases

Modern metabolomics, proteomics, and natural product datasets have now reached into the millions of tandem mass (MS/MS) spectra. The rapidly growing size of these datasets precludes laborious manual interpretation of all of the data. While MS/MS spectral library search approaches match spectra in an automated fashion, the limited size of available spectral libraries keeps identification rates for these datasets in the single-digit percentages. In addition, the sharing of experimental MS/MS data between researchers is uncommon. What is needed is a way to organize both identified and unidentified spectra into searchable, structurally related molecular families.
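Automated spectral library search typically scores a query spectrum against library spectra with a cosine similarity over matched peaks. The sketch below shows the bare idea on spectra represented as sparse {m/z bin: intensity} maps; the spectra and compound names are invented, and real tools additionally apply fragment-mass tolerances and intensity weighting.

```python
import math

def cosine_score(spec_a, spec_b):
    # Cosine similarity over peaks that fall in the same m/z bin.
    shared = set(spec_a) & set(spec_b)
    dot = sum(spec_a[mz] * spec_b[mz] for mz in shared)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b)

# Hypothetical query spectrum and a tiny two-entry "library".
query = {121: 0.9, 163: 0.4, 201: 0.2}
library = {
    "compound_x": {121: 0.8, 163: 0.5, 201: 0.3},
    "compound_y": {87: 1.0, 145: 0.6},
}
# Rank library entries by similarity to the query spectrum.
best = max(library, key=lambda name: cosine_score(query, library[name]))
```

The same pairwise score, applied between unidentified spectra, is what lets related molecules be clustered into molecular families even when no library match exists.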

Collimator/Image Reconstruction For Molecular Breast Imaging

Molecular breast imaging (MBI) and breast-specific gamma imaging (BSGI) utilize γ-cameras in a mammographic configuration to provide functional images of the breast. Several studies have confirmed that MBI has a high sensitivity for the detection of small breast lesions, independent of tumor type. A large clinical trial compared MBI with screening mammography in over 1,000 women with mammographically dense breast tissue and increased risk of breast cancer, and showed that MBI detected two to three times more cancers than mammography. Despite these favorable results, BSGI and MBI have not been widely accepted for breast cancer screening due to their greater effective radiation dose compared with mammography. Another disadvantage of MBI is the long imaging time, which causes discomfort to the patient. Furthermore, while digital breast tomosynthesis (DBT) produces 3D images, resulting in improved cancer detection over mammography, current clinical MBI and BSGI systems produce only 2D images. These disadvantages are due to the use of a parallel-hole collimator (PHC) with MBI and BSGI, which is inefficient, allowing only gamma rays traveling perpendicular to the detector to be recorded. Furthermore, a PHC cannot produce a 3D image with a stationary detector and results in a loss of image resolution with increasing distance between the tumor and the gamma detector.

Local Binary Pattern Network (LBPN)

Convolutional Neural Networks (CNNs) have had a notable impact on many applications. Modern CNN architectures such as AlexNet, VGG, GoogLeNet, and ResNet have greatly advanced the use of deep learning techniques in a wide range of computer vision applications. These gains have surely benefited from the continuing advances in the computing and storage capabilities of modern machines. Memory- and computation-efficient deep learning architectures are an active area of research in machine learning and computer architecture. Model-size reductions and efficiency gains have been reported by selectively binarizing operations in convolutional neural networks, approximating convolution while reducing floating-point arithmetic operations.
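The classic local binary pattern that gives LBP-style networks their name replaces a floating-point neighborhood operation with pure bit comparisons: each of a pixel's 8 neighbors is thresholded against the center value and the results are packed into one byte. A minimal sketch, with the bit ordering and the tiny image invented for illustration:

```python
def lbp_code(img, r, c):
    # Threshold the 8 neighbors against the center pixel and pack the
    # resulting bits into a single byte (0-255).
    center = img[r][c]
    # Neighbor offsets, clockwise starting at the top-left (assumed ordering).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

# Invented 3x3 grayscale patch; compute the code for the center pixel.
image = [[10, 10, 10],
         [10, 20, 30],
         [10, 10, 40]]
code = lbp_code(image, 1, 1)
```

Because the descriptor is built entirely from comparisons and bit shifts, it avoids floating-point multiply-accumulates, which is the source of the efficiency gains described above.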

DeepSign: Digital Rights Management in Deep Learning Models

As society becomes more and more complex, we have developed ways to analyze and solve some of these complexities via the convergence of artificial intelligence, cognitive science, and neuroscience. What has emerged is machine learning, which allows computers to improve automatically through experience. Developers working on artificial intelligence (AI) systems thus pair AI with machine-learning algorithms to cover a wide variety of machine-learning problems. The most advanced of these are supervised learning methods, which form their predictions via a learned mapping that can include decision trees, logistic regression, support vector machines, neural networks, and Bayesian classifiers. More recently, deep networks have emerged as multilayer networks used in a number of applications, such as computer vision and speech recognition. A practical concern in the rush to adopt AI as a service is the capability to perform model protection: AI models are usually trained by allocating significant computational resources to process massive amounts of training data. The built models are therefore considered the owner's intellectual property (IP) and need to be protected to preserve the competitive advantage.

GPS-Based Miniature Oceanographic Wave Measuring Buoy System

Oceanic monitoring helps coastal communities, economies, and ecosystems thrive. Coastlines and open oceans are very important to maritime countries for recreation, mineral and energy exploitation, shipping, weather forecasting, and national security. As improvements in solar power, GPS, and telecommunications have been made, directional wave buoys have emerged and set the standard in wave monitoring. Non-directional and directional wave measurements are of high interest to users because of the importance of wave monitoring for successful marine operations. Wave data, and climatological information derived from the data, are also used for a variety of engineering and scientific applications.

Software for auto-generation of text reports from radiology studies

Imaging machines used for radiology studies often export data (such as vascular velocities, bone densitometry, radiation dose, etc.) as characters stored in image format. Radiologists are expected to interpret this data and also store it in their text-based reports of the studies. This is usually accomplished by dictating the data into the text report or copying it by typing. However, both methods are error-prone and time-intensive.

Efficient Techniques For Dehazing Methods

Brief description not available

VIRBELA(TM)

Brief description not available

Method For Dynamic Intelligent Load Balancing

Brief description not available
