Browse Category: Computer > Security

Technique for Safe and Trusted AI

Researchers at the University of California, Davis have developed a technology that enables provable editing of deep neural networks (DNNs) to meet specified safety criteria without altering their architecture.

Photonic Physically Unclonable Function for True Random Number Generation and Biometric ID for Hardware Security Applications

Researchers at the University of California, Davis have developed a technology that introduces a novel approach to hardware security using photonic physically unclonable functions for true random number generation and biometric ID.

Deployable Anonymity System: Introducing Sparta

Metadata summarizes basic information about data, making specific data easier to track and work with. Today's communication systems, like WhatsApp, iMessage, and Signal, use end-to-end encryption to protect message contents, but they do not hide metadata: information about who communicates with whom, when, and how much, which is generally visible to the systems themselves and to network observers. As a result, metadata leakage and traffic analysis remain significant attack vectors in modern communication systems. Previous attempts to address this risk have generally been either insecure or prohibitively expensive, for example imposing inflexible global bandwidth restrictions and cumbersome synchronous schedules that cripple performance. Moreover, prior approaches relied on distributed trust for security, which is largely incompatible with the conventional organizations that host or use such apps.
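The contrast between hidden content and exposed metadata can be sketched in a few lines of Python (an illustrative mock-up, not Sparta's actual wire format; all field names and values are invented):

```python
import json
import os

# Illustrative only: end-to-end encryption hides the payload, but routing
# metadata (who, when, how much) stays visible to servers and the network.
ciphertext = os.urandom(240)  # stand-in for an encrypted message body

envelope = {
    "sender": "alice@example.com",   # visible metadata
    "recipient": "bob@example.com",  # visible metadata
    "timestamp": 1700000000,         # visible metadata
    "size_bytes": len(ciphertext),   # visible metadata
    # the message body itself is opaque to observers
}
wire_record = json.dumps(envelope)
```

Even with a perfectly encrypted body, every field in `envelope` is available for traffic analysis, which is exactly the leak a metadata-hiding system must close.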

Compact Key Encoding of Data for Public Exposure Such As Cloud Storage

A major aim of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical: their typically smaller key sizes mean lower storage requirements and faster processing. Smaller keys, however, open the protocols to vulnerabilities such as brute-force attacks. To reduce this risk, cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key at least as long as the message being sent; in this technique, the plaintext is paired with a random secret key (the pad). Asymmetric (public-key) frameworks use key pairs consisting of a public and a private key, and depend heavily on the privacy of the private key; asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, the backbone of internet security, uses the Transport Layer Security (TLS) protocol stack over Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that relies on many subprotocols to guarantee security; several of these consume substantial CPU power and involve complex processes not optimized for big-data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above, TLS, RSA, and AES) are not well suited to big-data applications, as they must perform a significant number of computations in practice.
In turn, cloud providers face increasing CPU processing times and power usage to maintain services. In the modern computing era, with quantum architectures and growing access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.
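The one-time-pad technique described above can be sketched in a few lines of Python (an illustrative toy, not the compact-key scheme this listing covers):

```python
import secrets

def otp_encrypt(data: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, used once, and at least as long
    # as the message; XOR is its own inverse, so the same function
    # both encrypts and decrypts.
    assert len(pad) >= len(data)
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # single-use pre-shared key
ciphertext = otp_encrypt(message, pad)
recovered = otp_encrypt(ciphertext, pad)  # decryption with the same pad
```

The storage cost is visible directly: the pad is as long as the message and can never be reused, which is the impracticality a compact-key scheme aims to reduce.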

Compact Key with Reusable Common Key for Encryption

As with cryptosystems generally, the challenge is to be both provably secure and practical: symmetric-key (private-key) methods are fast and use smaller keys, but are vulnerable to brute-force attacks unless keys are lengthened at the cost of added overhead; the one-time pad (OTP) cannot be cracked but requires a single-use pre-shared key at least as long as the message; and asymmetric (public-key) protocols, which depend on the privacy of the private key, are much slower in practice. Protocol suites such as TLS, which underpins HTTPS over TCP/IP, exchange keys via public-key handshakes and consume substantial CPU power, so traditional algorithms and protocols (TLS, RSA, AES) are ill suited to big-data applications. Cloud providers consequently face increasing CPU processing times and power usage, and in an era of quantum architectures and growing network and cloud use, the speed and integrity of such outmoded cryptographic models will be put to the test.

Extra-Compact Key with Reusable Common Key for Encryption

As with cryptosystems generally, the challenge is to be both provably secure and practical: symmetric-key (private-key) methods are fast and use smaller keys, but are vulnerable to brute-force attacks unless keys are lengthened at the cost of added overhead; the one-time pad (OTP) cannot be cracked but requires a single-use pre-shared key at least as long as the message; and asymmetric (public-key) protocols, which depend on the privacy of the private key, are much slower in practice. Protocol suites such as TLS, which underpins HTTPS over TCP/IP, exchange keys via public-key handshakes and consume substantial CPU power, so traditional algorithms and protocols (TLS, RSA, AES) are ill suited to big-data applications. Cloud providers consequently face increasing CPU processing times and power usage, and in an era of quantum architectures and growing network and cloud use, the speed and integrity of such outmoded cryptographic models will be put to the test.

Cross-Layer Device Fingerprinting System and Methods

Networks of connectivity-enabled devices, known as the Internet of Things (IoT), involve interrelated devices that connect and exchange data with one another and with the cloud. As the number of IoT devices and their applications continues to grow, managing and administering edge and access networks has become increasingly challenging: approximately 31 billion "things" are currently connected to the internet, with a projected rise to 75 billion devices by 2025. Because of IoT interconnectivity and ubiquitous device use, assessing risks, specifying reasonable safeguards, and implementing controls can overwhelm conventional frameworks. Any approach to better IoT network security, for example improved detection and denial or restriction of access by unauthorized devices, must consider its impact on performance, including speed, power use, interoperability, and scalability. The IoT network's physical and MAC layers are not impenetrable and face many known threats, especially identity-based attacks such as MAC spoofing. Common network infrastructure uses WPA2 or IEEE 802.11i to help protect users, their devices, and connected infrastructure. Nevertheless, the risk of MAC spoofing remains: bad actors can leverage public tools on 802.11 commodity hardware, or intercept sensitive data packets at scale, to access users' physical-layer data, which can lead to wider tampering and manipulation of hardware-level parameters.
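The cross-layer idea, binding a link-layer identity to physical-layer characteristics so that spoofing the MAC address alone is insufficient, can be sketched as follows (a conceptual toy, not the patented method; the choice of features, quantization, and function names are all assumptions made for illustration):

```python
import hashlib
import statistics

def fingerprint(mac: str, rssi_samples: list[float], clock_skew_ppm: float) -> str:
    # Combine the claimed MAC address (link layer) with coarsely quantized
    # physical-layer features; a device that forges the MAC but transmits
    # from different hardware/location will not reproduce the fingerprint.
    rssi_mean = round(statistics.mean(rssi_samples))
    skew_bucket = round(clock_skew_ppm, 1)
    material = f"{mac}|{rssi_mean}|{skew_bucket}".encode()
    return hashlib.sha256(material).hexdigest()

# Same claimed MAC, but different physical-layer measurements:
legit = fingerprint("aa:bb:cc:dd:ee:ff", [-48.2, -47.9, -48.5], 3.7)
spoofed = fingerprint("aa:bb:cc:dd:ee:ff", [-63.0, -62.4, -61.8], -1.2)
```

The quantization step matters in any real design: features such as RSSI fluctuate, so they must be bucketed (or matched statistically) rather than hashed raw, a trade-off between false accepts and false rejects.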

Techniques for Encryption based on Perfect Secrecy for Bounded Storage

A major aim of cryptography is to design cryptosystems that are provably secure and practical; factors such as integrity, confidentiality, and authentication are all important. Symmetric-key methods have traditionally been viewed as practical: their typically smaller key sizes mean lower storage requirements and faster processing. Smaller keys, however, open the protocols to vulnerabilities such as brute-force attacks. To reduce this risk, cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. Asymmetric frameworks use key pairs consisting of a public and a private key, and depend heavily on the privacy of the private key; asymmetric protocols are generally much slower than symmetric approaches. Symmetric-asymmetric hybrid models attempt to blend the key-distribution convenience of public-key schemes with the speed of private symmetric encryption; examples include GNU Privacy Guard, AES-RSA, and Elliptic Curve Cryptography with RSA. In the modern computing era, with quantum architectures and access to network and cloud resources on the rise, the integrity and confidentiality of such modern cryptographic models will increasingly be under pressure.
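The brute-force vulnerability of short symmetric keys mentioned above can be illustrated with a toy repeating-XOR cipher (illustrative only; real ciphers use 128-bit or longer keys, which is exactly the overhead trade-off described):

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR the data with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A 2-byte key gives only 256**2 = 65,536 possibilities -- trivially searchable.
secret_key = b"\x3a\x7f"
ciphertext = xor_cipher(b"hello world", secret_key)

def brute_force(ciphertext: bytes, known_prefix: bytes) -> bytes:
    # Exhaustively try every 2-byte key, using a known plaintext prefix
    # (a "crib") to recognize the correct decryption.
    for guess in itertools.product(range(256), repeat=2):
        key = bytes(guess)
        if xor_cipher(ciphertext, key).startswith(known_prefix):
            return key
    return b""

found = brute_force(ciphertext, b"hello")
```

Doubling the key length squares the search space, which is why defenders lengthen keys and why that lengthening carries the storage and processing overhead the entry describes.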

Blockchain Protocols for Advancements in Throughput, Fault-Tolerance, and Scalability

Researchers at the University of California, Davis have developed several blockchain paradigms that provide new approaches and expand on existing protocols to improve performance in large-scale blockchain implementations.

In-Sensor Hardware-Software Co-Design Methodology of the Hall Effect Sensors to Prevent and Contain the EMI Spoofing Attacks

Researchers at UCI have developed a novel hardware-software co-design methodology for protecting Hall effect sensors, found in autonomous vehicles, smart grids, industrial plants, and elsewhere, against EMI spoofing attacks. There are currently no comprehensive measures in place to protect Hall sensors.

(SD2020-340) Algorithm-Hardware Co-Optimization For Efficient High-Dimensional Computing

With the emergence of the Internet of Things (IoT), many applications run machine learning algorithms to perform cognitive tasks. These algorithms have proven effective for many tasks, e.g., object tracking, speech recognition, and image classification. However, since sensory and embedded devices generate massive data streams, limited device resources pose huge technical challenges. For example, although deep neural networks (DNNs) such as AlexNet and GoogleNet provide high classification accuracy for complex image classification tasks, their high computational complexity and memory requirements hinder their use in the broad variety of real-life (embedded) applications where device resources and power budgets are limited. Furthermore, in IoT systems, sending all the data to a powerful computing environment, e.g., the cloud, cannot guarantee scalability and real-time response, and is often undesirable due to privacy and security concerns. Thus, alternative computing methods are needed that can process the large amount of data at least partly on the less powerful IoT devices. Brain-inspired hyperdimensional (HD) computing has been proposed as such an alternative, processing cognitive tasks in a more lightweight way. HD computing builds on the fact that brains compute with patterns of neural activity that are not readily associated with numbers. Recent research has instead utilized high-dimensional vectors (e.g., more than a thousand dimensions), called hypervectors, to represent neural activities, and has shown successful progress on many cognitive tasks such as activity recognition, object recognition, language recognition, and bio-signal classification.
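The hypervector idea can be sketched in plain Python (a minimal illustration of random hypervectors, bundling, and similarity, not the co-optimized design this listing covers):

```python
import random

DIM = 10_000  # hypervectors: very high-dimensional random bipolar vectors

def random_hv(rng: random.Random) -> list[int]:
    # Random bipolar hypervector; any two such vectors are nearly orthogonal.
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bundle(vectors: list[list[int]]) -> list[int]:
    # Element-wise majority vote: the bundled vector remains similar
    # to every vector it was built from.
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

def similarity(a: list[int], b: list[int]) -> float:
    # Normalized dot product in [-1, 1]; near 0 for unrelated hypervectors.
    return sum(x * y for x, y in zip(a, b)) / DIM

rng = random.Random(0)
a, b, c = random_hv(rng), random_hv(rng), random_hv(rng)
ab = bundle([a, b])  # a single vector that "remembers" both a and b
```

Because every operation is element-wise on narrow integers, the computation is far lighter than DNN inference, which is what makes HD computing attractive on resource-limited IoT devices.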

(SD2019-340) Collaborative High-Dimensional Computing

Internet of Things (IoT) applications often analyze collected data using machine learning algorithms. As the amount of data keeps increasing, many applications send the data to powerful systems, e.g., data centers, to run the learning algorithms. On the one hand, sending the original data is undesirable due to privacy and security concerns. On the other hand, many machine learning models may require unencrypted (plaintext) data, e.g., original images, to train models and perform inference. When these computation tasks are offloaded, sensitive information may be exposed to an untrustworthy cloud system that is susceptible to internal and external attacks. In many IoT systems, the learning procedure should be performed with data held by a large number of user devices at the edge of the internet. These users may be unwilling to share their original data with the cloud and other users if security concerns cannot be addressed.

DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM

UCLA researchers in the Department of Mathematics have developed a method to maintain data privacy.

Techniques for Creation and Insertion of Test Points for Malicious Circuitry Detection

Researchers led by Dr. Potkonjak from the UCLA Department of Computer Science have developed a technique to detect hardware Trojans in integrated circuits.

System For Eliminating Clickbaiters On Visual-Centric Social Media

Researchers from the Department of Communication at UCLA have developed a system for identifying and eliminating clickbait from social media.

Privacy Preserving Stream Analytics

UCLA researchers in the Department of Computer Science have developed a new privacy preserving mechanism for stream analytics.

DeepSign: Digital Rights Management in Deep Learning Models

As society grows more complex, we have developed ways to analyze and address these complexities through the convergence of artificial intelligence, cognitive science, and neuroscience. What has emerged is machine learning, which allows computers to improve automatically through experience. Developers of artificial intelligence (AI) systems have thus aligned AI with machine-learning algorithms to cover a wide variety of machine-learning problems. The most advanced of these are supervised learning methods, which form their predictions via learned mappings that can include decision trees, logistic regression, support vector machines, neural networks, and Bayesian classifiers. More recently, deep networks have emerged as multilayer networks used in a number of applications, such as computer vision and speech recognition. A practical concern in the rush to adopt AI as a service is model protection: AI models are usually trained by allocating significant computational resources to process massive amounts of training data. The resulting models are therefore considered the owner's intellectual property (IP) and need to be protected to preserve competitive advantage.

Librando: Transparent Code Randomization For Just-In-Time Compilers

Just-in-time compilation is a method of executing computer code which, while boasting superior execution times, is prone to security exploits. UCI researchers have developed librando, a software framework for increasing security for just-in-time compilers, ensuring that generated program code is not predictable to an attacker.

Value-Based Information Flow Tracking in Software Packages

A collaboration between UCLA and Rutgers has developed a novel information-flow tracking technique to detect potential data leaks in mobile devices.
