Browse Category: Communications > Networking


Interference Management for Concurrent Transmission in Downlink Wireless Communications

It is well known that the communication capacity of wireless networks is limited by interference. Depending on the strength of the interference, there are three conventional approaches to this problem. If the interference is very strong, the receiver can decode the interfering signal and subtract it from the received signal using successive interference cancellation. If the interfering signal is very weak compared to the desired signal, it can be treated as noise. The third and most common case is when the interference is comparable to the desired signal. In this case the interference can be avoided by orthogonalizing it with the desired signal using techniques such as time division multiple access (TDMA) or frequency division multiple access (FDMA).

In addition to interference, wireless networks also experience channel fading, which conventional approaches attempt to combat. Depending on the coherence time of the fading, various techniques have been used; for example, fast fading may be mitigated by diversity techniques, interleaving, and error-correcting codes. Certain diversity techniques, such as the use of multiple antennas, have been shown to combat fading as well as increase multiplexing gain and system capacity.

Multiuser diversity is a technique to increase the capacity of wireless networks using multiple antennas at the base station: the base station selects the mobile device with the best channel condition, maximizing the signal-to-noise ratio (SNR). In some implementations of this approach, K random beams are constructed and information is transmitted to the users with the highest signal-to-interference-plus-noise ratio (SINR). Searching for the best SINR in the network, however, requires feedback from the mobile devices that scales linearly with the number of users. These implementations also use beamforming, which is complex to implement, and the cooperation requirement is substantial.
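The user-selection step described above can be illustrated with a minimal Python sketch (the channel values below are made up for illustration and are not from the listed technology). Note that the search loops over every user, which is exactly the linearly scaling feedback cost mentioned above.

```python
def sinr(signal_power, interference_power, noise_power):
    """Signal-to-interference-plus-noise ratio (linear scale)."""
    return signal_power / (interference_power + noise_power)

def select_user(channel_gains, noise_power=1.0):
    """Pick the user with the best SINR for a single beam.

    channel_gains[u] = (desired_gain, interference_gain) for user u.
    """
    best_user, best_sinr = None, -1.0
    for user, (desired, interference) in channel_gains.items():
        s = sinr(desired, interference, noise_power)
        if s > best_sinr:
            best_user, best_sinr = user, s
    return best_user, best_sinr

# Three users with hypothetical channel conditions.
gains = {"A": (4.0, 1.0), "B": (9.0, 2.0), "C": (2.5, 0.5)}
user, value = select_user(gains)
print(user, value)  # "B" wins: 9 / (2 + 1) = 3.0
```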

Compact Key Encoding of Data for Public Exposure Such As Cloud Storage

A major aim of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because of their typically smaller key sizes, which mean lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key at least as large as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use key pairs consisting of a public and a private key, and these models depend heavily on keeping the private key secret. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS), the backbone of internet security, uses the Transport Layer Security (TLS) protocol stack over Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that relies on a number of subprotocols to guarantee security. Many of these subprotocols are complex and CPU-intensive processes that are not optimized for big-data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above, as well as TLS, RSA, and AES) are not well suited to big-data applications, as they must perform a significant number of computations in practice.
In turn, cloud providers face increasing CPU processing times and power usage to appropriately maintain services. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.
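The one-time pad described above is simple enough to sketch directly: encryption and decryption are the same XOR with a random, single-use key at least as long as the message.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The pad must be at least as long as the message and never reused.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the identical XOR with the same pad.
message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # single-use random pad
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message
```

The unconditional security holds only while the key is truly random, kept secret, and never reused, which is precisely the key-distribution burden the abstract describes.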

HyNTP: An Adaptive Hybrid Network Time Protocol for Clock Synchronization in Heterogeneous Distributed Systems

Since the advent of asynchronous packet-based networks in communication and information technology, the topic of clock synchronization has received significant attention due to the temporal requirements of packet-based networks for the exchange of information. In more recent years, as distributed packet-based networks have evolved in terms of size, complexity, and, above all, application scope, there has been a growing need for new clock synchronization schemes with tractable design conditions to meet the demands of these evolving networks. Distributed applications such as robotic swarms, automated manufacturing, and distributed optimization rely on precise time synchronization among distributed agents for their operation. For example, in the case of distributed control and estimation over networks, the uncertainties of packet-based network communication require timestamping of sensor and actuator messages in order to synchronize the information to the evolution of the dynamical system being controlled or estimated. Such a scenario is impossible without a common timescale among the non-collocated agents in the system. In fact, the lack of a shared timescale among the networked agents can result in performance degradation that can destabilize the system. Moreover, one cannot always assume that consensus on time is a given, especially when the network associated with the distributed system is subject to perturbations such as noise, delay, or jitter. Hence, it is essential that these networked systems utilize clock synchronization schemes that establish and maintain a common timescale for their algorithms.

The limitations of centralized protocols motivated leaderless, consensus-based approaches that leverage seminal results on networked consensus (e.g., Cao et al. 2008). More recent approaches (Garone et al. 2015, Kikuya et al. 2017) employ average consensus to give asymptotic results on clock synchronization under asynchronous and asymmetric communication topologies. Unfortunately, a high number of iterations is often required, in both synchronous and asynchronous scenarios, before the desired synchronization accuracy is achieved. Furthermore, the constraint on asymmetric communication precludes any results guaranteeing stability or robustness, and these approaches suffer from over-complexity in terms of both computation and memory allocation. Finally, the algorithms subject the clocks to significant non-smooth adjustments in clock rate and offset that may prove undesirable in certain application settings.
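The average-consensus idea referenced above can be sketched in a few lines of Python (a toy model with made-up offsets and a ring topology, not the protocol from any cited paper): each node repeatedly nudges its clock toward its neighbors' values, and the spread shrinks only over repeated rounds, which is the slow-convergence drawback noted above.

```python
def consensus_step(offsets, neighbors, alpha=0.3):
    """One round of averaging each clock toward its neighbors' clocks."""
    new = {}
    for node, value in offsets.items():
        correction = sum(offsets[n] - value for n in neighbors[node])
        new[node] = value + alpha * correction
    return new

# Four nodes in a ring with different initial clock offsets (seconds).
offsets = {0: 0.0, 1: 2.0, 2: -1.0, 3: 3.0}
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

rounds = 0
while max(offsets.values()) - min(offsets.values()) > 1e-3:
    offsets = consensus_step(offsets, neighbors)
    rounds += 1
print(rounds, offsets)  # several rounds before the spread falls below 1 ms
```

Because the averaging weights here are symmetric, the clocks converge to the mean of the initial offsets; the per-round corrections are exactly the "non-smooth adjustments" the abstract warns about.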

Compact Key with Reusable Common Key for Encryption

A major aim of the field of cryptography is to design cryptosystems that is both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical in terms of typically a smaller key size, which means less storage requirements, and also faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead burden and makes the scheme less practical. One-time pad (OTP) is a symmetric-type encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as OTP). Asymmetric-type (public-key, asymptotic) frameworks use pairs of keys consisting of a public and private key, and these models depend heavily on the privacy of the non-public key. Asymmetric-based protocols are generally much slower than symmetric approaches in practice. Hypertext Transfer Protocol Secure (HTTPS) protocol which is the backbone of internet security uses the Transport Layer Security (TLS) protocol stack in Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that uses a myriad of other protocols to guarantee security. Many of these subprotocols consume a lot of CPU power and are complex processes which are not optimized for big data applications. TLS uses public-key cryptography paradigms to exchange the keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including schemes above and incorporating TLS, RSA, and AES) are not well suited in big data applications, as they need to perform a significant number of computations in practice. 
In turn, cloud providers face increasing CPU processing times and power usage to appropriately maintain services. In the modern computing era with quantum architecture and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.

Extra-Compact Key with Reusable Common Key for Encryption

A major aim of the field of cryptography is to design cryptosystems that is both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical in terms of typically a smaller key size, which means less storage requirements, and also faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead burden and makes the scheme less practical. One-time pad (OTP) is a symmetric-type encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as OTP). Asymmetric-type (public-key, asymptotic) frameworks use pairs of keys consisting of a public and private key, and these models depend heavily on the privacy of the non-public key. Asymmetric-based protocols are generally much slower than symmetric approaches in practice. Hypertext Transfer Protocol Secure (HTTPS) protocol which is the backbone of internet security uses the Transport Layer Security (TLS) protocol stack in Transmission Control Protocol / Internet Protocol (TCP/IP) for secure and private data transfer. TLS is a protocol suite that uses a myriad of other protocols to guarantee security. Many of these subprotocols consume a lot of CPU power and are complex processes which are not optimized for big data applications. TLS uses public-key cryptography paradigms to exchange the keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including schemes above and incorporating TLS, RSA, and AES) are not well suited in big data applications, as they need to perform a significant number of computations in practice. 
In turn, cloud providers face increasing CPU processing times and power usage to appropriately maintain services. In the modern computing era with quantum architecture and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.

Cross-Layer Device Fingerprinting System and Methods

Networks of connectivity-enabled devices, known as the internet of things (IoT), involve interrelated devices that connect and exchange data with other IoT devices and the cloud. As the number of IoT devices and their applications continues to increase significantly, managing and administering edge and access networks has become increasingly challenging. Currently, there are approximately 31 billion "things" connected to the internet, with a projected rise to 75 billion devices by 2025. Because of IoT interconnectivity and ubiquitous device use, assessing risks, specifying reasonable safeguards, and implementing controls can overwhelm conventional frameworks. Any approach to better IoT network security, for example through improved detection and denial or restriction of access by unauthorized devices, must consider its impact on performance, including speed, power use, interoperability, and scalability. The IoT network's physical and MAC layers are not impenetrable and face many known threats, especially identity-based attacks such as MAC spoofing. Common network infrastructure uses WPA2 (IEEE 802.11i) to help protect users, their devices, and connected infrastructure. However, the risk of MAC spoofing remains: bad actors can leverage public tools on commodity 802.11 hardware, or intercept sensitive data packets at scale, to access users' physical-layer data, which can lead to wider tampering and manipulation of hardware-level parameters.

Dynamically Tuning IEEE 802.11 Contention Window Using Machine Learning

The exchange of information among nodes in a communications network is based upon the transmission of discrete packets of data from a transmitter to a receiver over a carrier according to one or more of many well-known, new, or still-developing protocols. In this context, a protocol consists of a set of rules defining how the nodes interact with each other based on information sent over the communication links. Often, multiple nodes transmit a packet at the same time and a collision occurs. During a collision, the packets are disrupted and become unintelligible to the other devices listening to the carrier activity. Beyond the packet loss itself, network performance is greatly impacted: the delay introduced by the need to retransmit the packets cascades throughout the network to the other devices waiting to transmit over the carrier. Packet collision therefore has a multiplicative effect that is detrimental to communications networks. As a result, multiple international protocols have been developed to address packet collision, including collision detection and avoidance. Within wired Ethernet networks, the issue of packet collision has been largely addressed by network protocols that try to detect a collision and then wait until the carrier is clear to retransmit. Emphasis is placed on collision detection, i.e., a transmitting node can determine whether a collision has occurred by sensing the carrier. By contrast, the nature of wireless networks prevents wireless nodes from detecting a collision. This is the case, in part, because in wireless networks the nodes can send and receive but cannot sense packets traversing the carrier after the transmission has started. Another problem arises when two transmitting nodes are out of range of each other but the receiving node is within range of both: a transmitting node cannot sense another transmitting node that is out of communications range.
IEEE 802.11 protocols are the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. IEEE 802.11's collision-avoidance features come with deficiencies, however, such as fairness: the standard's practice of resetting its backoff parameters after each successful transmission may allow the node that succeeds in transmitting to dominate the channel for an arbitrarily long period of time, so other nodes may suffer severe short-term unfairness. The current state of the network (e.g., its load) should also be factored in. In general, there is a need for techniques that recognize network patterns and determine contention parameters responsive to those patterns.
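In standard 802.11 DCF, the contention window (CW) roughly doubles after each collision and resets to its minimum after a success. The following minimal sketch of that rule shows why a successful node immediately regains the most aggressive backoff range, the short-term unfairness described above.

```python
import random

CW_MIN, CW_MAX = 15, 1023  # 802.11 DCF contention window bounds

class Node:
    def __init__(self):
        self.cw = CW_MIN

    def draw_backoff(self):
        # Wait a uniformly random number of idle slots in [0, CW].
        return random.randint(0, self.cw)

    def on_collision(self):
        # Binary exponential backoff: CW grows toward CW_MAX.
        self.cw = min(2 * self.cw + 1, CW_MAX)

    def on_success(self):
        # DCF resets CW to the minimum -- the winner backs off least.
        self.cw = CW_MIN

node = Node()
node.on_collision(); node.on_collision()
print(node.cw)  # 63 after two collisions (15 -> 31 -> 63)
node.on_success()
print(node.cw)  # back to 15
```

A machine-learning tuner, as the title suggests, would replace the fixed doubling/reset rule with a CW chosen from observed network state; the sketch above is only the baseline behavior being improved upon.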

Integrated Microlens Coupler For Photonic Integrated Circuits

Silicon photonics is increasingly used in an array of communications and computing applications. In many applications, photonic chips must be coupled to optical fibers, which remains challenging due to the size mismatch between the on-chip photonics and the fiber itself. Existing approaches suffer from low alignment tolerance, sensitivity to fabrication variations, and complex processing, all of which hinder mass manufacture. To address these problems, researchers at UC Berkeley have developed a coupling mechanism between a silicon integrated photonic circuit and an optical fiber that uses a microlens to direct and collimate light into the fiber. Researchers have demonstrated that this device can achieve low coupling loss at large alignment tolerances, with an efficient and scalable manufacturing process analogous to existing manufacture of electronic integrated circuits. In particular, because the beam is directed above the silicon chip, this method obviates dry etching or polishing of the edge of the IC and allows the silicon photonics to be produced by dicing in much the same way as present electronic integrated circuits.

Collision Avoidance in Multi-hop Wireless Networks

In most wireless ad-hoc multi-hop networks, nodes compete for access to the same wireless communication channel, often resulting in collisions (interference) and ineffective carrier sensing. These issues have been targeted at the medium access control (MAC) layer by a variety of channel access schemes that aim to improve how the nodes share the wireless channel and to achieve a high quality of service. For example, there are contention-based MAC schemes, such as Carrier-Sense Multiple Access (CSMA) and Additive Links On-line Hawaii Area (ALOHA), and contention-free MAC schemes, such as time division multiple access (TDMA). However, the former perform poorly in hidden- and exposed-terminal environments, and the latter, in which the nodes are time-synchronized and the time frame is divided into multiple time slots allocated to the nodes, has limited data rates (bandwidth) and undesirable latency. Over the years, many other MAC schemes have addressed interference and conflict, as well as criteria such as throughput, fairness, latency, energy, and overhead. These modern protocols implement more sophisticated distributed transmission queues consisting of a sequence of transmission turns that grows and shrinks on demand. However, challenges remain in these more recent MAC protocols, such as long delays before nodes may join the network and/or transmission frames with complex structures that allocate time slot portions to signaling packets for elections.
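The contention-free TDMA scheme mentioned above can be sketched as a fixed frame of slots, one per node (a toy model with hypothetical node names, not any listed protocol). The sketch also shows TDMA's bandwidth cost: any slot without an owner is simply wasted airtime.

```python
def tdma_schedule(nodes, slots_per_frame):
    """Assign each node one repeating slot in a fixed TDMA frame."""
    if len(nodes) > slots_per_frame:
        raise ValueError("not enough slots for all nodes")
    return {node: slot for slot, node in enumerate(nodes)}

def owner_of_slot(schedule, t, slots_per_frame):
    """Which node may transmit (collision-free) at absolute slot time t."""
    current = t % slots_per_frame
    for node, slot in schedule.items():
        if slot == current:
            return node
    return None  # idle slot: wasted bandwidth when frame > node count

schedule = tdma_schedule(["A", "B", "C"], slots_per_frame=4)
print([owner_of_slot(schedule, t, 4) for t in range(8)])
# ['A', 'B', 'C', None, 'A', 'B', 'C', None]
```

Each node also waits up to a full frame for its turn, which is the latency drawback noted above.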

Techniques for Encryption based on Perfect Secrecy for Bounded Storage

A major aim of cryptography is to design cryptosystems that are provably secure and practical; factors such as integrity, confidentiality, and authentication are important. Symmetric-key methods have traditionally been viewed as practical because of their typically smaller key sizes, which mean lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. Asymmetric (public-key) frameworks use key pairs consisting of a public and a private key, and these models depend heavily on keeping the private key secret. Asymmetric protocols are generally much slower than symmetric approaches. Symmetric-asymmetric hybrid models attempt to blend the key-distribution convenience of public-key encryption schemes with the speed of private symmetric encryption schemes. Examples of hybrids include GNU Privacy Guard, Advanced Encryption Standard with RSA, and Elliptic Curve Cryptography with RSA. In the modern computing era, with quantum architectures and access to network and cloud resources on the rise, the integrity and confidentiality of such modern cryptographic models will increasingly be under pressure.
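The hybrid pattern described above can be illustrated with a deliberately insecure toy: textbook RSA with tiny primes wraps a random session key (the slow asymmetric step on a small input), while a trivial XOR stream stands in for a fast symmetric cipher such as AES for the bulk data. All numbers and the cipher are illustrative only.

```python
import secrets

# Toy textbook RSA key (tiny primes; illustration only, not secure).
p, q, e = 61, 53, 17
n, d = p * q, pow(e, -1, (p - 1) * (q - 1))

def xor_stream(data: bytes, key: int) -> bytes:
    # Stand-in for a real symmetric cipher such as AES.
    ks = key.to_bytes(2, "big") * (len(data) // 2 + 1)
    return bytes(a ^ b for a, b in zip(data, ks))

# Sender: wrap a random session key with RSA, encrypt the bulk symmetrically.
session_key = secrets.randbelow(n - 2) + 2
wrapped_key = pow(session_key, e, n)                 # slow, but input is small
ciphertext = xor_stream(b"bulk message data", session_key)  # fast bulk step

# Receiver: unwrap the session key with the private exponent, then decrypt.
recovered = pow(wrapped_key, d, n)
assert recovered == session_key
assert xor_stream(ciphertext, recovered) == b"bulk message data"
```

The design point is that the expensive asymmetric operation touches only the short session key, never the (potentially huge) payload.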

Flippo The Robo-Shoe-Fly: A Foot Dwelling Social Wearable Companion

Social interactions in school and office settings traditionally involve few coordinated physical interactions, and most group engagement centers on sharing electronic screens. Wearable robot companions are a promising new direction for encouraging coordinated physical movement and social interaction in group settings. A UC Santa Cruz researcher has developed a wearable social companion that encourages users to interact via physical movement.

A Novel IoT Protocol Architecture: Efficiency Through Data and Functionality Sharing Across Layers

The Internet’s TCP/IP protocol architecture is a layered system design. As such, the functions performed by the TCP/IP protocol suite are implemented at different protocol layers, where each layer provides a specific set of services to the layer above through a well-defined interface. Using this interface, data being received or sent is passed up or down the stack on its way through the network. However, layered design approaches can increase overhead, as each layer incurs additional communication (e.g., additional header fields) and processing costs. Furthermore, limiting the flow between layers to data-plane information restricts the sharing of control information across layers and may lead to functions being duplicated at different layers.
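The per-layer header cost noted above is easy to quantify. The sketch below uses typical Ethernet/IPv4/UDP header sizes to show that small IoT payloads spend most of each packet on headers, while bulk transfers barely notice them (payload sizes are illustrative).

```python
# Typical per-layer header sizes in bytes (Ethernet II, IPv4, UDP).
HEADERS = {"ethernet": 14, "ipv4": 20, "udp": 8}

def overhead_fraction(payload_bytes: int) -> float:
    """Fraction of each packet consumed by the stacked layer headers."""
    total_header = sum(HEADERS.values())
    return total_header / (payload_bytes + total_header)

# A 16-byte sensor reading vs. a near-full-size bulk payload.
print(round(overhead_fraction(16), 2))    # 0.72: headers dominate
print(round(overhead_fraction(1458), 2))  # 0.03: headers are negligible
```

This asymmetry is why sharing data and functionality across layers, rather than duplicating it in every header, matters most for IoT-scale traffic.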

Noise Reduction In High Frequency Amplifiers Using Transmission Lines To Provide Feedback

Low-noise amplifiers (LNAs) are ubiquitous in wireless data network receivers and radios. The maximum transmission distance is limited by receiver noise, which is mostly determined by the noise figure of the first amplifier stage, the LNA. Reducing LNA noise is thus always desirable: it can increase transmission range or reduce power consumption, resulting in higher performance or reduced system cost. This approach lowers the noise of the LNA relative to other available methods.
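The claim that the first stage dominates receiver noise is the classic Friis cascade formula, sketched below with illustrative stage values (not from the listed technology): later stages' noise contributions are divided by the gain of everything before them.

```python
import math

def cascaded_noise_factor(stages):
    """Friis formula: stages = [(noise_factor, gain), ...] in linear units."""
    total, gain_product = 0.0, 1.0
    for i, (f, g) in enumerate(stages):
        # Each later stage's excess noise is divided by the preceding gain.
        total += f if i == 0 else (f - 1) / gain_product
        gain_product *= g
    return total

db = lambda x: 10 * math.log10(x)
lin = lambda x_db: 10 ** (x_db / 10)

# LNA (NF 1 dB, gain 20 dB) followed by a noisier mixer (NF 10 dB, gain 10 dB).
stages = [(lin(1.0), lin(20.0)), (lin(10.0), lin(10.0))]
print(round(db(cascaded_noise_factor(stages)), 2))  # ~1.3 dB total
# Despite the 10 dB mixer, the cascade stays close to the LNA's own 1 dB.
```

Shaving even a fraction of a dB off the LNA therefore moves the whole receiver's noise figure almost one-for-one, which is why the invention targets that stage.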

Carrier Sense Multiple Access With Collision Avoidance And Pilots (CSMA/CAP)

In most wireless ad-hoc multi-hop networks, nodes compete for access to a shared wireless medium, often resulting in collisions (interference). A node is commonly equipped with a transceiver with a half-duplex omnidirectional antenna. Transmission degradation can occur when terminals are hidden from each other by physical structures, such as buildings. Moreover, since half-duplex nodes cannot receive while transmitting, not all packets sent by different terminals are detected by one another. In fact, no channel-access protocol based on the traditional handshake over a single channel can guarantee collision-free transmissions. Problems can arise in multi-hop wireless networks when hidden terminals, exposed transmitters, or exposed receivers are present.

Magneto-Optic Modulator

Brief description not available

Phase-Locked Loop Coupled Array for Phased Array Applications

Researchers at the University of California, Davis have developed a phase-locked loop coupled array system capable of generating phase shifts in phased array antenna systems while minimizing signal losses.

Systems and Methods for Sound-Enhanced Meeting Platforms

Computer-based, internet-connected, audio/video meeting platforms have become pervasive worldwide, especially since the 2020 emergence of the COVID-19 pandemic lockdown. These meeting platforms include Cisco Webex, Google Meet, GoTo, Microsoft Teams, and Zoom. However, those popular platforms are optimized for meetings in which all the participants are attending the meeting online, individually. Accordingly, those platforms have shortcomings when used for hybrid meetings in which some participants attend together in person while others attend online. The existing platforms are also problematic for large meetings in big rooms (e.g., classrooms) in which most or all of the participants are in person. To address these shortcomings, researchers at UC Berkeley conceived systems, methods, algorithms, and other software for a meeting platform optimized for hybrid meetings and large in-person meetings. The Berkeley meeting platform offers a user experience that is familiar to users of the conventional meeting platforms, and it does not require any specialized participant hardware or specialized physical room infrastructure (beyond standard internet connectivity).

Multi-Agent Navigation And Communication Systems

The field of autonomous transportation is rapidly evolving to operate in diverse settings and conditions. However, as the number of autonomous vehicles on the road increases, the complexity of the computations needed to safely operate all of them grows rapidly. Across multiple vehicles, this creates a very large volume of computations that must be performed very quickly (e.g., in real or near-real time). Thus, treating each autonomous vehicle as an independent entity may result in inefficient use of computing resources, as many redundant data collections and computations may be performed (e.g., two vehicles in close proximity may be performing computations related to the same detected object). To address this issue, researchers at UC Berkeley proposed algorithms for the management and exchange of shared information across nearby and distant vehicles. Under the proposed arrangement, autonomous vehicles may share data collected by their respective sensor systems with other autonomous vehicles and adjust their operations accordingly in a manner that is more computationally efficient. This can increase safety while reducing the computational load required of each individual vehicle.

Temporal And Spectral Dynamic Sonar System For Autonomous Vehicles

The field of autonomous transportation is rapidly evolving to operate in diverse settings and conditions. Critical to the performance of autonomous vehicles is the ability to detect other objects in the vehicle's vicinity and adjust accordingly. To do so, many autonomous vehicles utilize a variety of sensors, including sonar. Although these sensor systems have been shown to improve the safety of autonomous vehicles by reducing collisions, they tend to be computationally inefficient. For instance, they may generate large volumes of data that must be processed quickly (e.g., in real or near-real time). Performing excessive computations may delay the identification and deployment of necessary resources and actions and/or increase the cost of on-vehicle hardware, making it less financially appealing to the consumer. Researchers at UC Berkeley proposed algorithms for temporally and spectrally adaptive sonar systems for autonomous vehicles. These allow existing sonar systems to be used adaptively while interfacing with the hardware and software already employed on autonomous vehicles.
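The listing does not describe the adaptive algorithms themselves, but the basic sonar timing they would tune is simple to sketch: range follows from the echo's round-trip time, and the maximum range of interest dictates how often the sensor can ping, which is one natural knob for temporal adaptation.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def echo_range(round_trip_s: float) -> float:
    """Distance to an object from a sonar echo's round-trip time."""
    return SPEED_OF_SOUND * round_trip_s / 2

def min_ping_interval(max_range_m: float) -> float:
    """Shortest interval between pings so that echoes from the previous
    ping cannot be mistaken for echoes of the current one."""
    return 2 * max_range_m / SPEED_OF_SOUND

print(echo_range(0.02))          # 3.43 m for a 20 ms echo
print(min_ping_interval(10.0))   # ~58 ms per ping at a 10 m max range
```

A temporally adaptive system could, for instance, shorten the monitored range (and thus ping faster) at low speeds in cluttered settings, trading reach for update rate; the functions above only show the physical constraint behind that trade-off.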

A Battery-Less Wirelessly Powered Frequency-Swept Spectroscopy Sensor

UCLA researchers in the Department of Electrical and Computer Engineering have developed a wirelessly powered frequency-swept spectroscopy sensor.
