Browse Category: Computer > Software


Technique for Safe and Trusted AI

Researchers at the University of California, Davis have developed a technology that enables the provable editing of DNNs (deep neural networks) to meet specified safety criteria without altering their architecture.

Photonic Physically Unclonable Function for True Random Number Generation and Biometric ID for Hardware Security Applications

Researchers at the University of California, Davis have developed a technology that introduces a novel approach to hardware security using photonic physically unclonable functions for true random number generation and biometric ID.

Adversarial Resilient Malware Detector Based on Randomization

Researchers at the University of California, Davis have developed a machine learning (ML) malware detector based on a randomization technique to prevent cyberattacks on computer systems and networks.

Stochastic Route Planning For Electric Vehicles

Brief description not available

Haptic Smart Phone-Cover: A Real-Time Navigation System for Individuals with Visual Impairment

Researchers at the University of California, Davis have developed a haptic interface designed to aid visually impaired individuals in navigating their environment using their portable electronic devices.

Machine Learning And Attention For Intelligent Sensing

A revolutionary approach to sensor data processing that leverages bio-inspired computing for intelligent sensing.

Automatic Data Annotation and Self-Learning Models for Adaptive Machine Learning Applications

This technology introduces a novel end-to-end method for automatic data annotation and generation based on robust temporal causality among data streams, enhancing machine learning model accuracy and adaptability.

Automatic Data Annotation And Self-Learning Methods For Adaptive Machine Learning Applications.

This technology introduces a novel method for automatic data annotation and generation, enhancing machine learning model accuracy and adaptability.

Ragman: Software Infrastructure For AI Assistants

A software infrastructure designed to rapidly develop and test aligned conversational AI assistants for specific tasks.

Deployable Anonymity System: Introducing Sparta

Metadata summarizes basic information about data, making specific data easier to track and work with. Today’s communication systems, like WhatsApp, iMessage, and Signal, use end-to-end encryption to protect message contents, but they do not hide metadata: who communicates with whom, when, and how much. This metadata is generally visible to the systems themselves and to network observers, so metadata leakage and traffic analysis remain a significant attack vector in modern communication systems. Previous attempts to address this risk have generally been either insecure or prohibitively expensive, for example imposing inflexible global bandwidth restrictions and cumbersome synchronous schedules that cripple performance. Moreover, prior approaches relied on distributed trust for security, which is largely incompatible with conventional organizations hosting or using such apps.

Using Virtual Tile Routing For Navigating Complex Transit Hubs

GPS-based navigation applications have become a welcome part of daily life for street-level navigation, yet many travelers grow confused and frustrated when using the same applications inside established public transportation systems, especially users with a visual or cognitive impairment. Existing applications focus primarily on large-scale street navigation; several will not even attempt to guide someone through a metro, train, or bus station, and instead simply tell the user the label of the route the application intends them to take. Without small-scale directions, many people struggle to identify the platform or boarding zone for their preferred mode of transportation, or even how to reach it in the first place. Transit hubs, plazas, malls, and similar spaces have long been a pain point for developers and users alike, and the major players in navigation have not attempted to address this longstanding problem. The only existing application offering indoor navigation provides limited and inconsistent functionality, including only two-dimensional indoor mapping, because it depends on manually uploaded floor plans that are available solely from partnering locations. Adoption has stalled because each location must draw out its floor plan in an antiquated image format and submit it for approval.
Solving this problem would relieve considerable stress for travelers navigating unfamiliar areas and save time that can make the difference between a missed train and a nearly missed one.

Fast and Accurate Cardinality Estimation of Multi-Join Queries on Streams and Databases

Efficiently analyzing large volumes of information, as found in streaming data and big data applications, requires accurate cardinality estimates. This invention estimates cardinalities more accurately while using little memory and compute, speeding up query evaluation by as much as 50%.
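The description above does not disclose the estimator itself, but the role of a join-cardinality estimate can be illustrated with a generic Bernoulli-sampling estimator (a standard textbook technique, not necessarily the invention's method; the function names and the sampling rate `p` are illustrative):

```python
import random
from collections import Counter

def true_join_cardinality(r_keys, s_keys):
    """Exact |R JOIN S| on key equality, for comparison."""
    s_counts = Counter(s_keys)
    return sum(s_counts[k] for k in r_keys)

def estimate_join_cardinality(r_keys, s_keys, p=0.1, seed=0):
    """Bernoulli-sampling estimate of |R JOIN S| on key equality.

    Sample each relation independently with probability p, join the
    samples exactly, then scale the sampled join size by 1 / p**2.
    """
    rng = random.Random(seed)
    r_sample = [k for k in r_keys if rng.random() < p]
    s_sample = [k for k in s_keys if rng.random() < p]
    s_counts = Counter(s_sample)
    sampled_join = sum(s_counts[k] for k in r_sample)
    return sampled_join / (p * p)
```

The sample join touches only a fraction of each input, which is why low-memory estimators can guide a query optimizer before the full join is evaluated.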

Learned Image Compression With Reduced Decoding Complexity

The Mandt lab introduces a novel approach to neural image compression, significantly reducing decoding complexity while maintaining competitive rate-distortion performance.

Multi-Dimensional Computer Simulation Code For Proton Exchange Membrane (PEM) Electrolysis Cell (EC) Advanced Design And Control

Polymer electrolyte membrane (PEM) electrolyzers have received increasing attention for renewable hydrogen production through water splitting. Developing such electrolyzers requires understanding and modeling the flow of liquids, gases, and ions through the PEM. An advanced multi-dimensional, multi-physics model of the PEM electrolyzer is established to describe the two-phase flow, electron/proton transfer, mass transport, and water electrolysis kinetics.
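As background on how electrolysis kinetics enter a cell model (this is a zero-dimensional textbook polarization curve, not the multi-dimensional model described above, and every parameter value is an illustrative assumption):

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # universal gas constant, J/(mol K)

def cell_voltage(j, T=353.0, E_rev=1.23, j0_a=1e-7, j0_c=1e-3,
                 alpha=0.5, r_ohm=0.15):
    """0-D PEM electrolysis polarization curve (illustrative values).

    Cell voltage = reversible voltage + Tafel-form activation
    overpotentials at anode and cathode + ohmic loss.
    j is current density in A/cm^2; r_ohm is area resistance in ohm*cm^2.
    """
    eta_anode = (R * T / (alpha * F)) * math.log(j / j0_a)
    eta_cathode = (R * T / (alpha * F)) * math.log(j / j0_c)
    return E_rev + eta_anode + eta_cathode + j * r_ohm
```

A multi-dimensional model resolves these same loss terms locally across the membrane and electrodes rather than lumping them into a single curve.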

Methods and Computational System for Genetic Identification and Relatedness Detection

Deoxyribonucleic acid- (DNA-) based identification in forensics is typically accomplished via genotyping allele length at a defined set of short tandem repeat (STR) loci via polymerase chain reaction (PCR). These PCR assays are robust, reliable, and inexpensive. Given the multiallelic nature of each of these loci, a small panel of STR markers can provide suitable discriminatory power for personal identification. Massively parallel sequencing (MPS) technologies and genotype array technologies invite new approaches for DNA-based identification. Application of these technologies has provided catalogs of global human genetic variation at single-nucleotide polymorphic (SNP) sites and short insertion-deletion (INDEL) sites. For example, from the 1000 Genomes Project, there is now a catalog of nearly all human SNP and INDEL variation down to 1% worldwide frequency. Genotype files, generated via MPS or genotype array, can be compared between individuals to find regions that are co-inherited or identical-by-descent (IBD). These comparisons are the basis of the relative finder functions in many direct-to-consumer genetic testing products. A special case of relative-finding is self-identification. This is a trivial comparison of genotype files as self-comparisons will be identical across all sites, minus the error rate of the assay. For many forensic samples, however, the available DNA may not be suitable for PCR-based STR amplification, genotype array analysis, or MPS to the depth required for comprehensive, accurate genotype calling. In the case of PCR, one of the most common failure modes occurs when DNA is too fragmented for amplification. For these samples, it may be possible to directly observe the degree of DNA fragmentation from the decreased amplification efficiency of larger STR amplicons from a multiplex STR amplification. 
In the case of severely fragmented samples, where all DNA fragments are shorter than the shortest STR amplicon length, PCR simply fails with no product.
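Self-identification, as described above, reduces to comparing genotype calls at shared sites. A minimal sketch, using a hypothetical dict-of-sites representation rather than the invention's actual computational system:

```python
def genotype_concordance(g1, g2):
    """Fraction of shared sites where two genotype files agree.

    g1, g2: dicts mapping a site ID (e.g. an rsID) to a genotype
    call such as 'AG'. A self-comparison approaches 1.0, reduced
    only by the error rate of the assay; relatives show elevated
    but sub-identical concordance.
    """
    shared = set(g1) & set(g2)
    if not shared:
        return 0.0
    matches = sum(g1[site] == g2[site] for site in shared)
    return matches / len(shared)
```

Relative-finding extends this idea by looking for long runs of concordant sites (identical-by-descent segments) rather than a single genome-wide fraction.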

Next Generation Of Emergency System Based On Wireless Sensor Network

Recent mass evacuation events, including the 2018 Camp Fire and 2023 Maui Fire, have demonstrated shortcomings in our communication abilities during natural disasters and emergencies. Individuals fleeing dangerous areas were unable to obtain fast or accurate information about open evacuation routes and faced traffic gridlock, while nearby communities were unprepared for the emergent situation and influx of people. Climate change is increasing the frequency of natural hazards, the areas subject to them, and their associated risk levels, making communication channels that can operate when mobile networks and electric distribution systems are compromised crucial.

To address this need, UC Berkeley researchers have developed a mobile network-free communication system that can function during natural disasters and be adapted to most communication devices (mobile phones and laptops). The self-organized, mesh-based, low-power network is embedded into common infrastructure monitoring device nodes (e.g., pre-existing WSN, LoRa, and other LPWAN devices) for effective local communication. Local communication provides dedicated Emergency Messaging and “walkie-talkie” functions, while higher-level connectivity through a robust gateway architecture and data transmission units allows real-time internet access, communication with nearby communities, and even global connectivity. The system can provide GPS-free position information using trilateration, which can help locate nodes monitoring important environmental conditions or allow users to navigate.
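The GPS-free positioning mentioned above relies on trilateration. A minimal two-dimensional sketch with three anchor nodes at known coordinates and measured distances (an illustration of the geometry only, not the system's implementation):

```python
def trilaterate(anchors, dists):
    """2-D position from three anchors and measured distances.

    Each anchor i gives a circle (x - xi)^2 + (y - yi)^2 = di^2.
    Subtracting the first equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system solved directly.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21  # zero iff anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

In practice, distances come from radio signal measurements with noise, so deployed systems typically use more than three anchors and a least-squares fit of the same equations.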

Compact Key Encoding of Data for Public Exposure Such As Cloud Storage

A major aim of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because of their typically smaller key sizes, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key at least as large as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use pairs of keys consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, the backbone of internet security, uses the Transport Layer Security (TLS) protocol stack within Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that relies on a myriad of other protocols to guarantee security. Many of these subprotocols are complex and CPU-intensive, and they are not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above and those incorporating TLS, RSA, and AES) are not well suited to big data applications, as they require a significant number of computations in practice.
In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.
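The one-time pad described above fits in a few lines: encryption and decryption are the same XOR of the message with a same-length, single-use random key. This is a generic illustration of the classical technique, not part of the invention:

```python
import secrets

def otp_xor(message: bytes, pad: bytes) -> bytes:
    """XOR the message with a same-length, single-use random pad.

    Applying the function twice with the same pad recovers the
    message, so this one routine both encrypts and decrypts.
    """
    if len(pad) < len(message):
        raise ValueError("one-time pad must be at least message length")
    return bytes(m ^ k for m, k in zip(message, pad))

# The pad must be truly random and never reused.
msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))
ciphertext = otp_xor(msg, pad)
assert otp_xor(ciphertext, pad) == msg
```

The impracticality noted above is visible here: the pad is as long as the message and must be pre-shared and discarded after one use.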

Daily Move© - Infant Body Position Classification

Prof. John Franchak and his team have developed a prototype system that accurately classifies an infant's body position.

HyNTP: an Adaptive Hybrid Network Time Protocol for Clock Synchronization in Heterogeneous Distributed Systems

Since the advent of asynchronous packet-based networks in communication and information technology, clock synchronization has received significant attention due to the temporal requirements of packet-based networks for the exchange of information. In recent years, as distributed packet-based networks have grown in size, complexity, and, above all, application scope, there has been a growing need for new clock synchronization schemes with tractable design conditions to meet the demands of these evolving networks. Distributed applications such as robotic swarms, automated manufacturing, and distributed optimization rely on precise time synchronization among distributed agents for their operation. For example, in distributed control and estimation over networks, the uncertainties of packet-based communication require timestamping of sensor and actuator messages in order to synchronize the information to the evolution of the dynamical system being controlled or estimated. Such a scenario is impossible without a common timescale among the non-collocated agents in the system; in fact, the lack of a shared timescale among the networked agents can cause performance degradation severe enough to destabilize the system. Moreover, consensus on time cannot always be taken as given, especially when the network underlying the distributed system is subject to perturbations such as noise, delay, or jitter. Hence, it is essential that these networked systems use clock synchronization schemes that establish and maintain a common timescale for their algorithms. The limitations of centralized protocols motivated leader-less, consensus-based approaches that leverage the seminal results on networked consensus (e.g., Cao et al. 2008). More recent approaches (Garone et al. 2015, Kikuya et al. 2017) employ average consensus to give asymptotic results on clock synchronization under asynchronous and asymmetric communication topologies. Unfortunately, a high number of iterations is often required before the desired synchronization accuracy is achieved, and the constraint on asymmetric communication precludes guarantees of stability or robustness. These approaches also suffer from over-complexity in terms of both computation and memory allocation, and both the synchronous and asynchronous variants require a large number of iterations before synchronization is achieved. Finally, the algorithm subjects the clocks to significant non-smooth adjustments in clock rate and offset that may prove undesirable in certain application settings.
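The average-consensus idea used by the approaches cited above can be sketched for clock offsets on a fixed undirected topology. The step size and the ring topology below are illustrative assumptions, not the cited algorithms:

```python
def consensus_step(offsets, neighbors, eps=0.3):
    """One synchronous average-consensus update of clock offsets.

    Each node nudges its offset toward those of its neighbors;
    iterated on a connected undirected graph with a suitably small
    eps, all offsets converge to the network-wide average.
    """
    return [o + eps * sum(offsets[j] - o for j in neighbors[i])
            for i, o in enumerate(offsets)]

# Ring of four nodes whose clocks initially disagree:
offsets = [0.0, 4.0, 8.0, 12.0]
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
for _ in range(100):
    offsets = consensus_step(offsets, neighbors)
# All offsets approach the average of the initial values (6.0).
```

The slow convergence criticized above is visible in this scheme: accuracy improves only geometrically with the iteration count, and each update is a discrete jump in the clock value.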

Methods For Dysfluent Speech Transcription And Detection

Dysfluent speech modeling requires time-accurate and silence-aware transcription at both the word level and the phonetic level. However, current research in dysfluency modeling primarily focuses on either transcription or detection, and the performance of each remains limited. To address this problem, UC Berkeley researchers have developed a new unconstrained dysfluency modeling (UDM) approach that addresses both transcription and detection in an automatic and hierarchical manner. Furthermore, a simulated dysfluent dataset called VCTK++ enhances the capabilities of UDM in phonetic transcription. The effectiveness and robustness of UDM in both transcription and detection tasks have been demonstrated experimentally, and UDM eliminates the need for extensive manual annotation by providing a comprehensive solution.

Compact Key with Reusable Common Key for Encryption

A major aim of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because of their typically smaller key sizes, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key at least as large as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use pairs of keys consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, the backbone of internet security, uses the Transport Layer Security (TLS) protocol stack within Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that relies on a myriad of other protocols to guarantee security. Many of these subprotocols are complex and CPU-intensive, and they are not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above and those incorporating TLS, RSA, and AES) are not well suited to big data applications, as they require a significant number of computations in practice.
In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.

Extra-Compact Key with Reusable Common Key for Encryption

A major aim of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because of their typically smaller key sizes, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key at least as large as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use pairs of keys consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, the backbone of internet security, uses the Transport Layer Security (TLS) protocol stack within Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that relies on a myriad of other protocols to guarantee security. Many of these subprotocols are complex and CPU-intensive, and they are not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above and those incorporating TLS, RSA, and AES) are not well suited to big data applications, as they require a significant number of computations in practice.
In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.
