
Multi-Dimensional Computer Simulation Code For Proton Exchange Membrane (PEM) Electrolysis Cell (EC) Advanced Design And Control

Polymer electrolyte membrane (PEM) electrolyzers have received increasing attention for renewable hydrogen production through water splitting. To develop such electrolyzers, it is necessary to understand and model the flow of liquids, gases, and ions through the PEM. An advanced multi-dimensional, multi-physics model has been established for PEM electrolyzers to describe the two-phase flow, electron/proton transfer, mass transport, and water electrolysis kinetics.
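
The model itself is multi-dimensional, but the electrolysis kinetics and ohmic losses it couples with two-phase transport can be illustrated with a minimal zero-dimensional polarization sketch. The kinetic and membrane parameters below are illustrative assumptions, not values from the UC code:

```python
import numpy as np

# Minimal 0-D PEM electrolyzer polarization sketch (illustrative only).
# Cell voltage = reversible voltage + activation overpotentials + ohmic loss.
F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 353.0        # cell temperature, K (~80 C)

E_rev = 1.229    # reversible voltage, V (treated as constant here)

# Assumed kinetic and membrane parameters (placeholders, not measured values)
i0_an, alpha_an = 1e-6, 0.5    # anode exchange current density (A/cm^2), transfer coeff.
i0_cat, alpha_cat = 1e-3, 0.5  # cathode exchange current density (A/cm^2), transfer coeff.
delta_mem = 0.0178             # membrane thickness, cm (~178 um)
sigma_mem = 0.1                # protonic conductivity, S/cm

def cell_voltage(i):
    """Cell voltage (V) at current density i (A/cm^2)."""
    # Activation overpotentials from the Butler-Volmer equation (asinh form)
    eta_an = (R * T / (alpha_an * F)) * np.arcsinh(i / (2 * i0_an))
    eta_cat = (R * T / (alpha_cat * F)) * np.arcsinh(i / (2 * i0_cat))
    # Ohmic loss across the membrane
    eta_ohm = i * delta_mem / sigma_mem
    return E_rev + eta_an + eta_cat + eta_ohm

for i in (0.1, 0.5, 1.0, 2.0):
    print(f"i = {i:4.1f} A/cm^2 -> V_cell ~ {cell_voltage(i):.3f} V")
```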

Methods and Computational System for Genetic Identification and Relatedness Detection

Deoxyribonucleic acid- (DNA-) based identification in forensics is typically accomplished via genotyping allele length at a defined set of short tandem repeat (STR) loci via polymerase chain reaction (PCR). These PCR assays are robust, reliable, and inexpensive. Given the multiallelic nature of each of these loci, a small panel of STR markers can provide suitable discriminatory power for personal identification. Massively parallel sequencing (MPS) technologies and genotype array technologies invite new approaches for DNA-based identification. Application of these technologies has provided catalogs of global human genetic variation at single-nucleotide polymorphic (SNP) sites and short insertion-deletion (INDEL) sites. For example, from the 1000 Genomes Project, there is now a catalog of nearly all human SNP and INDEL variation down to 1% worldwide frequency. Genotype files, generated via MPS or genotype array, can be compared between individuals to find regions that are co-inherited or identical-by-descent (IBD). These comparisons are the basis of the relative finder functions in many direct-to-consumer genetic testing products. A special case of relative-finding is self-identification. This is a trivial comparison of genotype files as self-comparisons will be identical across all sites, minus the error rate of the assay. For many forensic samples, however, the available DNA may not be suitable for PCR-based STR amplification, genotype array analysis, or MPS to the depth required for comprehensive, accurate genotype calling. In the case of PCR, one of the most common failure modes occurs when DNA is too fragmented for amplification. For these samples, it may be possible to directly observe the degree of DNA fragmentation from the decreased amplification efficiency of larger STR amplicons from a multiplex STR amplification. In the case of severely fragmented samples, where all DNA fragments are shorter than the shortest STR amplicon length, PCR simply fails with no product.
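
Because self-identification reduces to a site-by-site comparison of genotype files, a minimal sketch of that comparison is shown below; the dictionary-style genotype representation and SNP identifiers are placeholders for whatever format (e.g., VCF or array exports) a real pipeline would parse:

```python
# Minimal sketch: genotype concordance between two call sets (illustrative only).
# Genotypes are assumed to be dicts mapping SNP IDs to unordered allele pairs,
# e.g. {"rs123": ("A", "G"), ...}.

def concordance(genotypes_a, genotypes_b):
    """Fraction of shared sites with identical (unordered) genotype calls."""
    shared = set(genotypes_a) & set(genotypes_b)
    if not shared:
        return 0.0
    matches = sum(
        1 for snp in shared
        if sorted(genotypes_a[snp]) == sorted(genotypes_b[snp])
    )
    return matches / len(shared)

sample = {"rs123": ("A", "G"), "rs456": ("C", "C"), "rs789": ("T", "A")}
other  = {"rs123": ("G", "A"), "rs456": ("C", "T"), "rs789": ("T", "A")}

print(concordance(sample, sample))  # self-comparison -> 1.0 (minus assay error in practice)
print(concordance(sample, other))   # non-self comparison -> lower
```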

Compact Key Encoding of Data for Public Exposure Such As Cloud Storage

A major aim of the field of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because they typically use smaller keys, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key that is at least as long as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use pairs of keys consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, which is the backbone of internet security, uses the Transport Layer Security (TLS) protocol on top of the Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that relies on a number of other protocols to guarantee security. Many of these subprotocols consume significant CPU power and are complex processes that are not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above as well as TLS, RSA, and AES) are not well suited to big data applications, as they must perform a significant number of computations in practice. In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.
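
As context, the one-time pad mentioned above is simple to illustrate: each plaintext byte is XORed with a random key byte that is used exactly once, so the key must be at least as long as the message. The following is a minimal sketch of that pairing, not the compact-key invention itself:

```python
import secrets

# Minimal one-time pad sketch (illustrative only): the key is random,
# single-use, and as long as the message; encryption and decryption
# are the same XOR operation.

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "OTP key must be at least message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # single-use random pad

ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)      # XOR twice with the same key restores the plaintext
print(recovered == message)               # True
```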

Software to Diagnose Sensory Issues in Fragile X Syndrome and Autism

Professor Anubhuti Goel and colleagues from the University of California, Riverside have developed a novel diagnostic tool and software program that provides a quick, objective measure of sensory issues for individuals with autism spectrum disorders and Fragile X syndrome. The tool uses a software application to administer a game; based on the individual's score at the end of the game, a diagnosis about sensory issues may be made. This technology is advantageous because it may provide an easily accessible, low-cost, and safe diagnostic for Fragile X syndrome and autism that can also be developed into a telehealth diagnostic tool.

Daily Move© - Infant Body Position Classification

Prof. John Franchak and his team have developed a prototype system that accurately classifies an infant's body position.

HyNTP: An Adaptive Hybrid Network Time Protocol for Clock Synchronization in Heterogeneous Distributed Systems

Since the advent of asynchronous packet-based networks in communication and information technology, the topic of clock synchronization has received significant attention due to the temporal requirements of packet-based networks for the exchange of information. In more recent years, as distributed packet-based networks have grown in size, complexity, and, above all, application scope, there has been a growing need for new clock synchronization schemes with tractable design conditions to meet the demands of these evolving networks. Distributed applications such as robotic swarms, automated manufacturing, and distributed optimization rely on precise time synchronization among distributed agents for their operation. For example, in the case of distributed control and estimation over networks, the uncertainties of packet-based network communication require timestamping of sensor and actuator messages in order to synchronize the information to the evolution of the dynamical system being controlled or estimated. Such a scenario is impossible without a common timescale among the non-collocated agents in the system. In fact, the lack of a shared timescale among the networked agents can cause performance degradation severe enough to destabilize the system. Moreover, one cannot always assume that consensus on time is a given, especially when the network associated with the distributed system is subject to perturbations such as noise, delay, or jitter. Hence, it is essential that these networked systems employ clock synchronization schemes that establish and maintain a common timescale for their algorithms. The arrival of more centralized protocols motivated leader-less, consensus-based approaches that leverage the seminal results on networked consensus (e.g., Cao et al. 2008). More recent approaches (Garone et al. 2015, Kikuya et al. 2017) employ average consensus to give asymptotic results on clock synchronization under asynchronous and asymmetric communication topologies. Unfortunately, a high number of iterations of the algorithm is often required before the desired synchronization accuracy is achieved. Furthermore, the constraint of asymmetric communication precludes any results guaranteeing stability or robustness. These approaches also suffer from excessive complexity in terms of both computation and memory allocation, and both synchronous and asynchronous scenarios require a large number of iterations before synchronization is achieved. Finally, the algorithm subjects the clocks to significant non-smooth adjustments in clock rate and offset that may prove undesirable in certain application settings.
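
A minimal simulation of the average-consensus idea underlying these approaches is sketched below: each agent repeatedly nudges its clock offset toward those of its neighbors, and the offsets converge to the network average only asymptotically, after many iterations. The topology, gain, and initial offsets are illustrative assumptions:

```python
import numpy as np

# Minimal average-consensus sketch for clock offsets (illustrative only).
# Each agent i updates: x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)

# Assumed 4-node ring topology (adjacency matrix) and initial clock offsets in ms
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
offsets = np.array([12.0, -3.0, 7.5, 0.5])   # initial clock offsets (ms)
eps = 0.2                                    # consensus gain (small enough for stability)

for _ in range(200):
    offsets = offsets + eps * (A @ offsets - A.sum(axis=1) * offsets)

print(offsets)          # all entries approach the average of the initial offsets
print(offsets.mean())   # the average is preserved for this symmetric topology
```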

Methods For Dysfluent Speech Transcription And Detection

Dysfluent speech modeling requires time-accurate and silence-aware transcription at both the word level and the phonetic level. However, current research in dysfluency modeling primarily focuses on either transcription or detection, and the performance of each aspect remains limited. To address this problem, UC Berkeley researchers have developed a new unconstrained dysfluency modeling (UDM) approach that addresses both transcription and detection in an automatic and hierarchical manner, eliminating the need for extensive manual annotation. Furthermore, a simulated dysfluent dataset called VCTK++ enhances the capabilities of UDM in phonetic transcription. The effectiveness and robustness of UDM in both transcription and detection tasks have been demonstrated experimentally.

Compact Key with Reusable Common Key for Encryption


Extra-Compact Key with Reusable Common Key for Encryption


Cross-Layer Device Fingerprinting System and Methods

Networks of connectivity-enabled devices, known as the internet of things (IoT), involve interrelated devices that connect and exchange data with other IoT devices and the cloud. As the number of IoT devices and their applications continues to increase significantly, managing and administering edge and access networks has become increasingly challenging. Currently, there are approximately 31 billion "things" connected to the internet, with a projected rise to 75 billion devices by 2025. Because of IoT interconnectivity and ubiquitous device use, assessing the risks, designing and specifying reasonable safeguards, and implementing controls can overwhelm conventional frameworks. Any approach to better IoT network security, for example through improved detection and denial or restriction of access by unauthorized devices, must consider its impact on performance, including speed, power use, interoperability, and scalability. The IoT network's physical and MAC layers are not impenetrable and face many known threats, especially identity-based attacks such as MAC spoofing. Common network infrastructure uses WPA2 or IEEE 802.11i to help protect users, their devices, and the connected infrastructure. However, the risk of MAC spoofing remains: bad actors can leverage public tools on commodity 802.11 hardware, or intercept sensitive data packets at scale, to access users' physical-layer data, which can lead to wider tampering and manipulation of hardware-level parameters.
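
As a purely hypothetical illustration of why cross-layer information helps (this is not the patented fingerprinting method): a MAC address is only a claimed identity, whereas physical-layer observations such as received signal strength are harder to forge, so a claimed MAC whose physical-layer readings deviate from its learned profile can be flagged. The feature choice and threshold below are assumptions:

```python
import statistics

# Hypothetical illustration (not the patented method): flag frames whose
# physical-layer reading (here, RSSI in dBm) deviates strongly from the
# profile previously learned for the claimed MAC address.

profiles = {}   # claimed MAC -> list of past RSSI observations

def observe(mac: str, rssi: float, z_threshold: float = 3.0) -> bool:
    """Record an observation; return True if it looks like possible spoofing."""
    history = profiles.setdefault(mac, [])
    suspicious = False
    if len(history) >= 10:
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0
        suspicious = abs(rssi - mean) / stdev > z_threshold
    history.append(rssi)
    return suspicious

# Legitimate device hovers around -48 dBm; a far-away spoofer shows up at -80 dBm.
for r in [-47, -49, -48, -50, -46, -48, -49, -47, -48, -50, -49]:
    observe("aa:bb:cc:dd:ee:ff", r)
print(observe("aa:bb:cc:dd:ee:ff", -80.0))  # True: deviates from the learned profile
```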

Telehealth-Mediated Physical Rehabilitation Systems and Methods

The use of telemedicine/telehealth increased substantially during the COVID-19 pandemic, leading to its accelerated development, utilization, and acceptability. Telehealth momentum with patients, providers, and other stakeholders will likely continue, which will further promote its safe and evidence-based use. Telehealth-driven improvements in healthcare have also extended to musculoskeletal care. In a recent study of telehealth physical therapy implemented in response to COVID-19, almost 95% of participants felt satisfied with the outcome they received from the telehealth physical therapy (PT) services, and over 90% expressed willingness to attend another telehealth session. While telehealth has enhanced accessibility through virtual patient visits, some physical rehabilitation still depends largely on a physical facility and tools for evaluation and therapy. For example, limb kinematics of the shoulder joint is difficult to evaluate remotely in PT, because the structure of the shoulder allows tri-planar movement that cannot be estimated by simple single-plane joint models. With the emergence of gaming technologies, such as videogames and virtual reality (VR), come new potential tools for virtual physical rehabilitation protocols. Some research has shown that digital game environments, and associated peripherals such as immersive VR (iVR) headsets, can provide a powerful medium and motivator for physical exercise. And while low-cost motion tracking systems exist to match user movement in the real world to that in the virtual environment, challenges remain in bridging traditional PT tooling and telehealth-friendly physical rehabilitation.

Software For Predictive Scheduling Of Crop-Transport Robots Acting As Harvest-Aids During Manual Harvesting

Researchers at the University of California, Davis have developed an automated harvesting system that uses predictive scheduling for crop-transport robots, reducing manual labor and increasing harvesting efficiency.

MR-Based Electrical Property Reconstruction Using Physics-Informed Neural Networks

Electrical properties (EP), such as permittivity and conductivity, dictate the interactions between electromagnetic waves and biological tissue. EP are biomarkers for pathology characterization, such as cancer. Imaging of EP helps monitor the health of tissue and can provide important information in therapeutic procedures. Magnetic resonance (MR)-based electrical properties tomography (MR-EPT) uses MR measurements, such as the magnetic transmit field B1+, to reconstruct EP. These reconstructions rely on calculating spatial derivatives of the measured B1+. However, the numerical approximation of derivatives amplifies noise, introducing errors and artifacts in the reconstructions. Recently, a supervised learning-based method (DL-EPT) was introduced to reconstruct robust EP maps from noisy measurements. Still, the pattern-matching nature of this method does not allow it to generalize to new samples, since the network is trained on a limited number of simulated data pairs, which makes it unrealistic for clinical applications. Thus, there is a need for a robust and realistic method for EP map reconstruction.
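
For context, a common conventional MR-EPT reconstruction assumes locally homogeneous tissue and inverts the Helmholtz relation, estimating the admittivity as sigma + i*omega*epsilon = laplacian(B1+) / (i*omega*mu0*B1+); the discrete Laplacian is what amplifies measurement noise. The sketch below illustrates that conventional pipeline on a synthetic map (the field values and geometry are placeholders), not the physics-informed neural network approach:

```python
import numpy as np

# Sketch of conventional Helmholtz-based MR-EPT (not the PINN method):
# admittivity = laplacian(B1+) / (i * omega * mu0 * B1+), assuming locally
# homogeneous properties. The B1+ map below is a synthetic placeholder.

mu0 = 4e-7 * np.pi
omega = 2 * np.pi * 128e6          # ~128 MHz (3 T proton Larmor frequency)

def laplacian(field, h):
    """Second-order finite-difference Laplacian of a 2-D complex map."""
    lap = np.zeros_like(field, dtype=complex)
    lap[1:-1, 1:-1] = (
        field[2:, 1:-1] + field[:-2, 1:-1] +
        field[1:-1, 2:] + field[1:-1, :-2] - 4 * field[1:-1, 1:-1]
    ) / h**2
    return lap

def conductivity_map(b1_plus, h):
    """Conductivity estimate (real part of admittivity) from a complex B1+ map."""
    admittivity = laplacian(b1_plus, h) / (1j * omega * mu0 * b1_plus)
    return admittivity.real

# Synthetic B1+ map with a little noise: the derivatives amplify this noise.
h = 2e-3                                        # 2 mm voxel spacing
x = np.linspace(-0.1, 0.1, 64)
X, Y = np.meshgrid(x, x)
b1 = np.exp(-(X**2 + Y**2) / 0.01) * np.exp(1j * 0.5)
b1_noisy = b1 + 1e-4 * (np.random.randn(64, 64) + 1j * np.random.randn(64, 64))

print(conductivity_map(b1_noisy, h)[30:34, 30:34])  # interior values, visibly noisy
```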

Universal Patient Monitoring

Sensor-based patient monitoring is a promising approach to assessing risk, which healthcare clinics can then use to focus efforts on the highest-risk patients without having to spend time manually assessing risk. For example, pressure ulcers/injuries are localized damage to the skin and/or underlying tissue that usually occurs over a bony prominence. They most commonly develop in individuals with low mobility, such as those who are bedridden or confined to a wheelchair; they are attributed to some combination of pressure, friction, shear force, temperature, humidity, and restriction of blood flow; and they are more prevalent in patients with chronic health problems. Sensor-based patient monitoring can be tuned to the individual based on the relative sensor readings. However, existing sensor-based monitoring techniques, such as pressure monitoring, are one-off solutions that are not supported by a comprehensive system integrating sensing, data collection, storage, data analysis, and visualization. While traditional monitoring solutions are suitable for their intended purposes, these approaches require substantial re-programming as the suite of monitoring sensors changes over time.

Dynamically Tuning IEEE 802.11 Contention Window Using Machine Learning

The exchange of information among nodes in a communications network is based upon the transmission of discrete packets of data from a transmitter to a receiver over a carrier according to one or more of many well-known, new, or still-developing protocols. In this context, a protocol consists of a set of rules defining how the nodes interact with each other based on information sent over the communication links. Often, multiple nodes transmit a packet at the same time and a collision occurs. During a collision, the packets are disrupted and become unintelligible to the other devices listening to the carrier activity. In addition to packet loss, network performance is greatly impacted: the delay introduced by the need to retransmit the packets cascades throughout the network to the other devices waiting to transmit over the carrier. Packet collision therefore has a multiplicative effect that is detrimental to communications networks. As a result, multiple international protocols have been developed to address packet collision, including collision detection and avoidance. Within the context of wired Ethernet networks, the issue of packet collision has been largely addressed by network protocols that try to detect a packet collision and then wait until the carrier is clear to retransmit. Emphasis is placed on collision detection, i.e., a transmitting node can determine whether a collision has occurred by sensing the carrier. By contrast, the nature of wireless networks prevents wireless nodes from detecting a collision. This is the case, in part, because in wireless networks the nodes can send and receive but cannot sense packets traversing the carrier after the transmission has started. Another problem arises when two transmitting nodes are out of range of each other but the receiving node is within range of both: a transmitting node cannot sense another transmitting node that is out of communications range. IEEE 802.11 protocols are the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. The packet collision features of IEEE 802.11 come with deficiencies, such as unfairness. Because 802.11 resets certain parameters (such as the contention window) after each successful transmission, the node that succeeds in transmitting may dominate the channel for an arbitrarily long period of time, and other nodes may suffer from severe short-term unfairness. The current state of the network (e.g., load) should also be factored in. In general, there is a need for techniques that recognize network patterns and determine these parameters in response to those patterns.
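
The unfairness arises from how the distributed coordination function manages the contention window: the window doubles after a collision but resets to its minimum after every successful transmission, so a recent winner contends again with the smallest possible backoff. A minimal sketch of that rule follows (the CWmin/CWmax values are common 802.11 defaults; how to set the window adaptively is exactly what a learned policy could decide):

```python
import random

# Minimal sketch of 802.11 DCF contention-window handling (illustrative only).
CW_MIN, CW_MAX = 15, 1023   # common 802.11 defaults (in slots)

class Node:
    def __init__(self):
        self.cw = CW_MIN

    def draw_backoff(self) -> int:
        """Pick a random backoff count uniformly in [0, CW]."""
        return random.randint(0, self.cw)

    def on_collision(self):
        # Binary exponential backoff: roughly double the window, up to CW_MAX.
        self.cw = min(2 * self.cw + 1, CW_MAX)

    def on_success(self):
        # Standard DCF resets to CW_MIN after success -- the source of the
        # short-term unfairness noted above; an ML policy could instead set
        # this value based on the observed network load.
        self.cw = CW_MIN

node = Node()
node.on_collision(); node.on_collision()
print(node.cw, node.draw_backoff())   # window grows after collisions
node.on_success()
print(node.cw)                        # snaps back to CW_MIN after a success
```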

Method For Producing Renderings From 3D Models Using Generative Machine Learning

Existing approaches to visualizing 3D models are capable of producing highly detailed representations of 3D scenes with precision and significant compositional control, but they also require a significant amount of time and expertise by the user to create and configure. Recent developments in generative machine learning (GML) have brought about systems that are capable of quickly producing convincing synthetic images of objects, people, landscapes, and environments, without the need for a 3D model, but these are difficult to precisely control and compose. Therefore, current methods cannot directly relate to detailed 3D models with the fidelity required for many applications, including architecture, product/industrial design, and experience design. To address this opportunity, UC Berkeley researchers have developed a new, GML-integrated 3D modeling and visualization workflow. The workflow streamlines the visualization process by eliminating arduous and time-consuming aspects while maintaining important points of user control. The invention is tailored for the production of “semantically-guided” visualizations of 3D models by coupling the detailed compositional control offered by 3D models with the unique facility of defining visual properties of geometry using natural language. The invention allows designers to more rapidly, efficiently, and intuitively iterate on designs.

Yarn-based algorithm for generating realistic cloth renderings

Researchers at UC Irvine have developed an efficient algorithm for generating computer-rendered textiles with fiber-level details and easy editability. This technology greatly enhances the richness of virtual fabric models and has the potential to impact various industries such as online retail, textile design, videogames, and animated movies.

Biological and Hybrid Neural Networks Communication

During the initial stages of development, the human brain self-assembles from a vast network of billions of neurons into a system capable of sophisticated cognitive behaviors. The human brain maintains these capabilities over a lifetime of homeostasis, and neuroscience helps us explore the brain's capabilities. The pace of progress in neuroscience depends on the experimental toolkits available to researchers. New tools are required to explore new forms of experiments and to achieve better statistical certainty. Significant challenges remain in modern neuroscience in terms of unifying processes at the macroscopic and microscopic scales. Recently, brain organoids, three-dimensional neural tissue structures generated from human stem cells, have been used to model neural development and connectivity. Organoids are more realistic than two-dimensional cultures, recapitulating the brain, which is inherently three-dimensional. While progress has been made in studying large-scale brain patterns and behaviors, as well as in understanding the brain at a cellular level, it is still unclear how smaller neural interactions (e.g., on the order of 10,000 cells) create meaningful cognition. Furthermore, systems for the interrogation, observation, and data acquisition of such in vitro cultures, and for streaming data online to link with analysis infrastructure, remain challenging.

Software Tool for Predicting Sequences in a Genome that are Subject to Restriction or Other Surveillance Mechanisms

Many genomes encode restriction-modification systems (RMs) that act to protect the host cell from invading DNA by cutting at specific sites (frequently short 4-6 base reverse-complement palindromes). RMs also protect host DNA from being unfavorably cut by modifying sites within the host DNA that could otherwise be targeted by the host's own surveillance enzymes. It is also not unusual to find these enzymes adjacent to each other in the host genome. Traditional approaches to understanding these sites involve finding a methylase, which is typically adjacent to a restriction enzyme, then extracting DNA, expressing the protein, and testing DNA sequence for evidence of cutting. In certain laboratory research (e.g., programs that involve transforming DNA/RNA), it may be desirable to understand more comprehensively which sequences are being surveilled by the host. Moreover, it may be desirable to know or predict which surveillance enzymes are present in a genome in order to improve cell transformation efficiency through evasion of those sequences.
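
A first step toward such prediction can be illustrated by scanning a sequence for short reverse-complement palindromes, the site class most restriction enzymes recognize. The sketch below only enumerates candidate 4- and 6-bp palindromic sites; linking sites to the methylase/restriction-enzyme pairs actually encoded in a genome is the harder problem the tool addresses:

```python
# Minimal sketch: find 4- and 6-bp reverse-complement palindromes in a DNA
# sequence (candidate restriction sites). Illustrative only; it does not
# predict which surveillance enzymes a genome actually encodes.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def palindromic_sites(seq: str, lengths=(4, 6)):
    """Yield (position, site) for every reverse-complement palindrome found."""
    seq = seq.upper()
    for k in lengths:            # even lengths only: odd-length RC palindromes cannot exist
        for i in range(len(seq) - k + 1):
            site = seq[i:i + k]
            if site == revcomp(site):
                yield i, site

example = "ACGGAATTCTTGGATCCAA"  # contains GAATTC (EcoRI-like) and GGATCC (BamHI-like)
for pos, site in palindromic_sites(example):
    print(pos, site)
```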

Cloud-Based Cardiovascular Wireless Monitoring Device

Cardiovascular disease is the leading cause of death both worldwide and in the United States, with associated U.S. costs reaching approximately $229 billion per year in 2017 and 2018. Early detection, which can drastically reduce both death rates and treatment costs, requires access to facilities and highly trained physicians that can be difficult to reach in rural areas and developing countries, despite their prevalence of cardiovascular disease. Computer-based models that use, e.g., PCG (phonocardiogram), EKG (electrocardiogram), or other cardiac data are a promising route to bridging the gap in standard of care for these underserved areas. However, current algorithms are unable to account for demographic features, such as race, sex, or other characteristics, which are known to affect both the structure of the heart and the presentation of heart disease. To address this problem, UC Berkeley researchers have developed a new, cloud-based system for collecting a patient's continuous cardiovascular data, monitoring for and detecting disease, and keeping a doctor informed about the cardiac health of the patient. The system sends an alarm when disease or a heart attack is detected. To generate the most accurate diagnoses by taking demographic information into account, the system includes private and ethical dataset collection and model-training techniques.

Method To Inverse Design Mechanical Behaviors Using Artificial Intelligence

Metamaterials are constructed from regular patterns of simpler constituents known as unit cells. These engineered metamaterials can exhibit exotic mechanical properties not found in naturally occurring materials, and accordingly they have the potential for use in a variety of applications from running shoe soles to automobile crumple zones to airplane wings. Practical design using metamaterials requires the specification of the desired mechanical properties based on understanding the precise unit cell structure and repeating pattern. Traditional design approaches, however, are often unable to take advantage of the full range of possible stress-strain relationships, as they are hampered by significant nonlinear behavior, process-dependent manufacturing errors, and the interplay between multiple competing design objectives. To solve these problems, researchers at UC Berkeley have developed a machine learning algorithm in which designers input a desired stress-strain curve that encodes the mechanical properties of a material. Within seconds, the algorithm outputs the digital design of a metamaterial that, once printed, fully encapsulates the desired properties from the inputted stress-strain curve. This algorithm produces results with a fidelity to the desired curve in excess of 90%, and can reproduce a variety of complex phenomena completely inaccessible to existing methods.
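
The reported fidelity compares the achieved stress-strain response with the target curve. One plausible way such a figure could be computed, offered here only as an illustrative assumption rather than the researchers' stated metric, is one minus the normalized area between the two curves:

```python
import numpy as np

# Hypothetical fidelity metric (an illustrative assumption, not the published definition):
# fidelity = 1 - (area between achieved and target curves) / (area under the target curve).

def fidelity(strain, target_stress, achieved_stress):
    d_strain = strain[1] - strain[0]                  # uniform strain spacing assumed
    error_area = np.sum(np.abs(achieved_stress - target_stress)) * d_strain
    target_area = np.sum(np.abs(target_stress)) * d_strain
    return 1.0 - error_area / target_area

strain = np.linspace(0.0, 0.5, 200)
target = 2.0 * strain + 5.0 * strain**2                   # desired nonlinear stress-strain curve
achieved = target * (1.0 + 0.05 * np.sin(20 * strain))    # printed response with small deviations

print(f"fidelity ~ {fidelity(strain, target, achieved):.1%}")  # well above 90% for small deviations
```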

Methods and Systems for Large Group Chat Conversations

In today's modern computing environment, the growth of internet speeds and web-friendly devices has enabled a new generation of telecommunication technology and practice. Electronic chat (messaging) applications have become a common tool for both synchronous and asynchronous communication because of their ease of use and flexibility. Electronic group chat has also become a common tool to facilitate group discussion, including teaching, mentoring, and decision-making. Group chat is a feature in many popular business and social apps that support audio/video web-conferencing, including Zoom, Google, Microsoft, and Facebook. Typical web-conference software may include a window containing sub-windows for video, a presentation, and/or group chat. However, group chat today is limited in its ability to engage all users in a discussion, especially as the group size grows. In a large group chat where all users are engaged, the resulting firehose of messages makes it impossible to have a coherent conversation. For conveners and participants alike, the results range from mild distraction to unstructured noise, leading people to disengage with the conversation and/or miss important messages, which limits the usefulness of any platform's group chat feature.
