Browse Category: Communications > Internet


Interference Management for Concurrent Transmission in Downlink Wireless Communications

It is well known that the communication capacity of wireless networks is limited by interference. Depending on the strength of the interference, there are three conventional approaches to this problem. If the interference is very strong, the receiver can decode the interfering signal and subtract it from the received signal using successive interference cancellation. If the interfering signal is very weak compared to the desired signal, it can be treated as noise. The third and most common possibility is when the interference is comparable to the desired signal. In this case the interference can be avoided by orthogonalizing it with the desired signal using techniques such as time division multiple access (TDMA) or frequency division multiple access (FDMA). In addition to interference, wireless networks also experience channel fading. Conventional approaches to wireless networking attempt to combat fading. Depending on the coherence time of the fading, various approaches have been used. For example, fast fading may be mitigated by the use of diversity techniques, interleaving, and error-correcting codes. Certain diversity techniques, such as the use of multiple antennas, have been shown to help combat fading as well as increase multiplexing gain and system capacity. Multiuser diversity is a technique for increasing the capacity of wireless networks using multiple antennas at the base station. In this approach the base station selects the mobile device with the best channel condition, maximizing the signal-to-noise ratio (SNR). In some implementations of this approach, K random beams are constructed and information is transmitted to the users with the highest signal-to-interference-plus-noise ratio (SINR). Searching for the best SINR in the network, however, requires feedback from the mobile devices that scales linearly with the number of users. These implementations also use beamforming, which is complex to implement, and their cooperation requirements are substantial.
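The SINR-based user selection described above can be illustrated with a minimal numpy sketch, assuming a toy channel model; the array sizes, channel draws, and variable names are illustrative only and are not part of the listed technology.

```python
import numpy as np

rng = np.random.default_rng(0)
num_users, num_beams, noise_power = 8, 4, 1.0

# Toy channel: |h|^2 gains between each user and each of the K random beams.
gains = rng.exponential(scale=1.0, size=(num_users, num_beams))

# SINR of user u on beam b: the desired beam's gain over noise plus the
# interference contributed by the other K-1 simultaneous beams.
total = gains.sum(axis=1, keepdims=True)
sinr = gains / (noise_power + (total - gains))

# Each beam serves the user reporting the highest SINR on it; collecting
# this feedback from every user is what scales linearly with network size.
selected_users = sinr.argmax(axis=0)
print(selected_users, sinr.max(axis=0))
```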

Compact Key Encoding of Data for Public Exposure Such As Cloud Storage

A major aim of the field of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because they typically use smaller keys, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key that is at least as long as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use key pairs consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, a backbone of internet security, uses the Transport Layer Security (TLS) protocol on top of the Transmission Control Protocol / Internet Protocol (TCP/IP) stack for secure and private data transfer. TLS is a protocol suite that relies on a number of subprotocols to guarantee security; many of these consume substantial CPU power and involve complex processing that is not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above and those incorporating TLS, RSA, and AES) are not well suited to big data applications, as they must perform a significant number of computations in practice. In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.
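As a minimal sketch of the one-time pad pairing described above, the message is simply XORed with an equally long, single-use random pad; the helper names below are illustrative only.

```python
import secrets

def otp_encrypt(plaintext: bytes):
    # The pad must be truly random, at least as long as the message,
    # and never reused; here a fresh pad is drawn per message.
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR with the same pad recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ct, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, pad) == b"attack at dawn"
```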

HyNTP: An Adaptive Hybrid Network Time Protocol for Clock Synchronization in Heterogeneous Distributed Systems

Since the advent of asynchronous packet-based networks in communication and information technology, the topic of clock synchronization has received significant attention due to the temporal requirements of packet-based networks for the exchange of information. In more recent years, as distributed packet-based networks have evolved in terms of size, complexity, and, above all, application scope, there has been a growing need for new clock synchronization schemes with tractable design conditions to meet the demands of these evolving networks. Distributed applications such as robotic swarms, automated manufacturing, and distributed optimization rely on precise time synchronization among distributed agents for their operation. For example, in the case of distributed control and estimation over networks, the uncertainties of packet-based network communication require timestamping of sensor and actuator messages in order to synchronize the information to the evolution of the dynamical system being controlled or estimated. Such a scenario is impossible without a common timescale among the non-collocated agents in the system. In fact, the lack of a shared timescale among the networked agents can result in performance degradation that can destabilize the system. Moreover, one cannot always assume that consensus on time is a given, especially when the network associated with the distributed system is subject to perturbations such as noise, delay, or jitter. Hence, it is essential that these networked systems utilize clock synchronization schemes that establish and maintain a common timescale for their algorithms. The limitations of centralized protocols motivated leader-less, consensus-based approaches that leverage the seminal results on networked consensus (e.g., Cao et al. 2008). More recent approaches (Garone et al. 2015, Kikuya et al. 2017) employ average consensus to give asymptotic results on clock synchronization under asynchronous and asymmetric communication topologies. Unfortunately, a high number of iterations of the algorithm is often required, in both synchronous and asynchronous scenarios, before the desired synchronization accuracy is achieved. Furthermore, the constraint on asymmetric communication precludes any results guaranteeing stability or robustness, and these approaches suffer from over-complexity in terms of both computation and memory allocation. Finally, the algorithm subjects the clocks to significant non-smooth adjustments in clock rate and offset that may prove undesirable in certain application settings.
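The average-consensus adjustment that the cited approaches build on can be sketched as follows; this is not the HyNTP algorithm itself, and the ring topology, gain, and iteration count are hypothetical values chosen only to show why many iterations are typically needed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
offsets = rng.normal(0.0, 10.0, n)        # initial clock offsets (ms)

# Hypothetical undirected ring topology; neighbors[i] lists node i's peers.
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]
gain = 0.3                                # consensus step size

for _ in range(200):                      # many iterations before convergence
    updates = np.zeros(n)
    for i, (a, b) in enumerate(neighbors):
        updates[i] = gain * ((offsets[a] - offsets[i]) + (offsets[b] - offsets[i]))
    offsets += updates

print(offsets)  # all offsets converge toward their common average
```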

Compact Key with Reusable Common Key for Encryption

A major aim of the field of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because they typically use smaller keys, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key that is at least as long as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use key pairs consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, a backbone of internet security, uses the Transport Layer Security (TLS) protocol on top of the Transmission Control Protocol / Internet Protocol (TCP/IP) stack for secure and private data transfer. TLS is a protocol suite that relies on a number of subprotocols to guarantee security; many of these consume substantial CPU power and involve complex processing that is not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above and those incorporating TLS, RSA, and AES) are not well suited to big data applications, as they must perform a significant number of computations in practice. In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.

Extra-Compact Key with Reusable Common Key for Encryption

A major aim of the field of cryptography is to design cryptosystems that are both provably secure and practical. Symmetric-key (private-key) methods have traditionally been viewed as practical because they typically use smaller keys, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. The one-time pad (OTP) is a symmetric encryption technique that cannot be cracked, but it requires a single-use pre-shared key that is at least as long as the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as the OTP). Asymmetric (public-key) frameworks use key pairs consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches in practice. The Hypertext Transfer Protocol Secure (HTTPS) protocol, a backbone of internet security, uses the Transport Layer Security (TLS) protocol on top of the Transmission Control Protocol / Internet Protocol (TCP/IP) stack for secure and private data transfer. TLS is a protocol suite that relies on a number of subprotocols to guarantee security; many of these consume substantial CPU power and involve complex processing that is not optimized for big data applications. TLS uses public-key cryptography to exchange keys between the communicating parties through the TLS handshake protocol. Unfortunately, traditional cryptographic algorithms and protocols (including the schemes above and those incorporating TLS, RSA, and AES) are not well suited to big data applications, as they must perform a significant number of computations in practice. In turn, cloud providers face increasing CPU processing times and power usage to maintain services appropriately. In the modern computing era, with quantum architectures and increased access to network and cloud resources, the speed and integrity of such outmoded cryptographic models will be put to the test.

Cross-Layer Device Fingerprinting System and Methods

Networks of connectivity-enabled devices, known as the Internet of Things (IoT), involve interrelated devices that connect and exchange data with other IoT devices and the cloud. As the number of IoT devices and their applications continues to increase significantly, managing and administering edge and access networks has become increasingly challenging. Currently, there are approximately 31 billion "things" connected to the internet, with a projected rise to 75 billion devices by 2025. Because of IoT interconnectivity and ubiquitous device use, assessing the risks, designing and specifying reasonable safeguards, and implementing controls can overwhelm conventional frameworks. Any approach to better IoT network security, for example through improved detection and denial or restriction of access by unauthorized devices, must consider its impact on performance, including speed, power use, interoperability, and scalability. The physical and MAC layers of IoT networks are not impenetrable and face many known threats, especially identity-based attacks such as MAC spoofing. Common network infrastructure uses WPA2 or IEEE 802.11i to help protect users, their devices, and connected infrastructure. However, the risk of MAC spoofing remains: bad actors can leverage public tools on commodity 802.11 hardware, or intercept sensitive data packets at scale, to access users' physical-layer data, which can lead to wider tampering and manipulation of hardware-level parameters.

Telehealth-Mediated Physical Rehabilitation Systems and Methods

The use of telemedicine/telehealth increased substantially during the COVID-19 pandemic, leading to its accelerated development, utilization, and acceptance. Telehealth momentum with patients, providers, and other stakeholders will likely continue, which will further promote its safe and evidence-based use. Improved healthcare through telehealth has also extended to musculoskeletal care. In a recent study of the implementation of telehealth physical therapy in response to COVID-19, almost 95% of participants were satisfied with the outcome they received from telehealth physical therapy (PT) services, and over 90% expressed willingness to attend another telehealth session. While telehealth has enhanced accessibility through virtual patient visits, certain physical rehabilitation still depends largely on physical facilities and tools for evaluation and therapy. For example, limb kinematics of the shoulder joint are difficult to evaluate remotely, because the structure of the shoulder allows tri-planar movement that cannot be estimated by simple single-plane joint models. With the emergence of gaming technologies, such as videogames and virtual reality (VR), come new potential tools for virtual physical rehabilitation protocols. Some research has shown that digital game environments, and associated peripherals like immersive VR (iVR) headsets, can provide a powerful medium and motivator for physical exercise. And while low-cost motion tracking systems exist to match user movement in the real world to that in the virtual environment, challenges remain in bridging traditional PT tooling and telehealth-friendly physical rehabilitation.

Dynamically Tuning IEEE 802.11 Contention Window Using Machine Learning

The exchange of information among nodes in a communications network is based upon the transmission of discrete packets of data from a transmitter to a receiver over a carrier according to one or more of many well-known, new, or still developing protocols. In this context, a protocol consists of a set of rules defining how the nodes interact with each other based on information sent over the communication links. Often, multiple nodes transmit a packet at the same time and a collision occurs. During a collision, the packets are disrupted and become unintelligible to the other devices listening to the carrier activity. In addition to packet loss, network performance is greatly impacted: the delay introduced by the need to retransmit the packets cascades through the network to the other devices waiting to transmit over the carrier. Packet collision therefore has a compounding effect that is detrimental to communications networks. As a result, multiple international protocols have been developed to address packet collision, including collision detection and avoidance. Within the context of wired Ethernet networks, the issue of packet collision has been largely addressed by network protocols that try to detect a packet collision and then wait until the carrier is clear to retransmit. Emphasis is placed on collision detection, i.e., a transmitting node can determine whether a collision has occurred by sensing the carrier. In contrast, the nature of wireless networks prevents wireless nodes from detecting a collision. This is the case, in part, because in wireless networks the nodes can send and receive but cannot sense packets traversing the carrier after the transmission has started. Another problem arises when two transmitting nodes are out of range of each other, but the receiving node is within range of both; in this case, a transmitting node cannot sense another transmitting node that is out of communications range. IEEE 802.11 protocols are the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. The IEEE 802.11 collision-avoidance features, however, have deficiencies such as unfairness: 802.11's reset of certain parameters after each successful transmission may allow the node that succeeds in transmitting to dominate the channel for an arbitrarily long period of time, so that other nodes suffer from severe short-term unfairness. The current state of the network (e.g., load) should also be factored in. In general, there is a need for techniques that recognize network patterns and determine parameters that are responsive to those patterns.
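A minimal sketch of the 802.11-style contention-window behavior referred to above follows, assuming the standard binary exponential backoff rules; the class and constant names are illustrative, and the reset-on-success step is the behavior that can let a winning node keep dominating the channel.

```python
import random

CW_MIN, CW_MAX = 15, 1023   # 802.11-style contention window bounds (slots)

class Node:
    def __init__(self):
        self.cw = CW_MIN

    def draw_backoff(self) -> int:
        # Uniform backoff counter in [0, CW]; the smallest counter wins the medium.
        return random.randint(0, self.cw)

    def on_collision(self):
        # Binary exponential backoff: roughly double the window, capped at CW_MAX.
        self.cw = min(2 * (self.cw + 1) - 1, CW_MAX)

    def on_success(self):
        # Resetting to CW_MIN after every success lets the winner keep drawing
        # small backoffs, which is the short-term unfairness noted above.
        self.cw = CW_MIN

n = Node()
n.on_collision(); n.on_collision()
print(n.cw, n.draw_backoff())   # 63, plus a random backoff in [0, 63]
```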

Methods and Systems for Large Group Chat Conversations

In today's modern computing environment, the growth of internet speeds and web-friendly devices has enabled a newer generation of telecommunication technology and practice. Electronic chat (messaging) applications have become a common tool for both synchronous and asynchronous communication because of their ease of use and flexibility. Electronic group chat has also become a common tool to facilitate group discussion, including teaching, mentoring, and decision-making. Group chat is a feature in many popular business and social apps that support audio/video web-conferencing, including Zoom, Google, Microsoft, and Facebook. Typical web-conference software may include a window containing sub-windows for a video, presentation, and/or group chat. However, group chat today is limited in its ability to engage all users in a discussion, especially as the group size grows. In a large group chat, if all users are engaged, the resulting firehose of messages makes it impossible to have a coherent conversation. For conveners and participants alike, the results range from mild distraction to unstructured noise, leading people to disengage from the conversation and/or miss important messages, which limits the usefulness of any platform's group chat feature.

Collision Avoidance in Multi-hop Wireless Networks

In most wireless ad-hoc multi-hop networks, nodes compete for access to the same wireless communication channel, often resulting in collisions (interference) and ineffective carrier sensing. These issues have been targeted at the medium access control (MAC) layer by a variety of channel access schemes aimed at improving how the nodes share the wireless channel and achieving a high quality of service. For example, there are contention-based MAC schemes, like Carrier-Sense Multiple Access (CSMA) and Additive Links On-Line Hawaii Area (ALOHA), and contention-free MAC schemes, like time division multiple access (TDMA). However, the former perform poorly in hidden- and exposed-terminal environments, and the latter, in which the nodes are time-synchronized and the time frame is divided into multiple time slots allocated to the nodes, offer limited data rates (bandwidth) and undesirable latency. Over the years, many other MAC schemes have addressed interference and conflict, as well as improving criteria like throughput, fairness, latency, energy, and overhead. These modern protocols implement more sophisticated distributed transmission queues consisting of a sequence of transmission turns that grows and shrinks on demand. However, challenges remain in these more recent MAC protocols, such as long delays before nodes are allowed to join the network, and/or the use of transmission frames with complex structures to allocate time-slot portions to signaling packets for elections.
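A simplified sketch of the on-demand transmission-turn queue idea mentioned above follows; the join/leave/rotate rules here are hypothetical simplifications for illustration, not the behavior of any specific listed protocol.

```python
from collections import deque

class TurnQueue:
    """Toy distributed-queue model: a sequence of transmission turns
    that grows when nodes join and shrinks when they leave."""

    def __init__(self):
        self.turns = deque()

    def join(self, node_id):
        # A joining node is appended at the tail of the turn sequence.
        if node_id not in self.turns:
            self.turns.append(node_id)

    def leave(self, node_id):
        if node_id in self.turns:
            self.turns.remove(node_id)

    def next_turn(self):
        # The head of the queue transmits, then rotates to the tail.
        if not self.turns:
            return None
        node = self.turns[0]
        self.turns.rotate(-1)
        return node

q = TurnQueue()
for name in ("a", "b", "c"):
    q.join(name)
print([q.next_turn() for _ in range(4)])   # ['a', 'b', 'c', 'a']
```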

Techniques for Encryption based on Perfect Secrecy for Bounded Storage

A major aim of the field of cryptography is to design cryptosystems that are provably secure and practical. Factors such as integrity, confidentiality, and authentication are important. Symmetric-key methods have traditionally been viewed as practical because they typically use smaller keys, which means lower storage requirements and faster processing. This, however, opens the protocols up to certain vulnerabilities, such as brute-force attacks. To reduce risk, the cryptographic keys are made longer, which in turn adds overhead and makes the scheme less practical. Asymmetric frameworks use key pairs consisting of a public and a private key, and these models depend heavily on the privacy of the non-public key. Asymmetric protocols are generally much slower than symmetric approaches. Symmetric-asymmetric hybrid models attempt to blend the convenience of public (asymmetric) encryption schemes with the speed and effectiveness of private (symmetric) encryption schemes. Examples of hybrids include GNU Privacy Guard, Advanced Encryption Standard-RSA, and Elliptic Curve Cryptography-RSA. In the modern computing era, with quantum architectures and access to network and cloud resources on the rise, the integrity and confidentiality of such modern cryptographic models will increasingly be under pressure.
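The AES-RSA hybrid pattern named above can be sketched with the third-party `cryptography` package (using that package is an assumption for illustration, not part of the listed technology): the bulk data is encrypted with a fresh symmetric AES-GCM key, and only that small key is wrapped with the recipient's RSA public key.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Fast symmetric encryption of the bulk data with a one-off AES-256-GCM key...
data = b"bulk payload " * 1000
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, data, None)

# ...and slow asymmetric encryption of only the 32-byte session key.
wrapped_key = public_key.encrypt(session_key, oaep)

# The receiver unwraps the session key, then decrypts the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == data
```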

A Novel IoT Protocol Architecture: Efficiency Through Data and Functionality Sharing Across Layers

The Internet's TCP/IP protocol architecture is a layered system design. As such, the functions performed by the TCP/IP protocol suite are implemented at different protocol layers, where each layer provides a specific set of services to the layer above through a well-defined interface. Using this interface, data being received or sent is passed up or down the stack on its way through the network. However, layered design approaches can increase overhead, as each layer incurs additional communication (e.g., additional header fields) and processing costs. Furthermore, limiting the flow between layers to data-plane information restricts the sharing of control information across layers and may lead to functions being duplicated at different layers.
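The per-layer header overhead mentioned above can be put in rough numbers with a back-of-the-envelope sketch; the header sizes below are the standard minimum Ethernet/IPv4/UDP values, and the payload size is an arbitrary example.

```python
# Minimum header sizes in bytes for a common stack (no options or extensions).
headers = {"Ethernet": 14, "IPv4": 20, "UDP": 8}

payload = 16   # e.g., a small IoT sensor reading
frame = payload + sum(headers.values())

overhead = 1 - payload / frame
print(f"{frame} bytes on the wire, {overhead:.0%} header overhead")
# For a 16-byte payload, roughly 72% of the frame is headers -- the kind of
# per-layer cost that cross-layer data and functionality sharing aims to reduce.
```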

Noise Reduction In High Frequency Amplifiers Using Transmission Lines To Provide Feedback

Low noise amplifiers (LNAs) are ubiquitous in wireless data network receivers and radios. The maximum transmission distance is limited by the receiver noise, which is mostly determined by the noise figure of the first amplifier stage, the LNA. Reducing LNA noise is thus always desirable, since it can increase transmission range or reduce power consumption, resulting in higher performance or reduced system cost. This approach lowers the noise of the LNA relative to the other available methods.

Carrier Sense Multiple Access With Collision Avoidance And Pilots (CSMA/CAP)

In most wireless ad-hoc multi-hop networks, nodes compete for access to a shared wireless medium, often resulting in collisions (interference). A node is commonly equipped with a transceiver that has a half-duplex omnidirectional antenna. Transmission degradation can occur when terminals are hidden from each other by physical structures, such as buildings. Moreover, since half-duplex nodes cannot receive while transmitting, not all packets sent by different terminals are detected by one another. In fact, no channel-access protocol based on the traditional handshake over a single channel can guarantee collision-free transmissions. Problems can arise in multi-hop wireless networks when hidden terminals, exposed transmitters, or exposed receivers are present.

Magneto-Optic Modulator

Brief description not available

Phase-Locked Loop Coupled Array for Phased Array Applications

Researchers at the University of California, Davis have developed a phase-locked loop coupled array system capable of generating phase shifts in phased array antenna systems while minimizing signal losses.

(SD2019-340) Collaborative High-Dimensional Computing

Internet of Things (IoT) applications often analyze collected data using machine learning algorithms. As the amount of data keeps increasing, many applications send the data to powerful systems, e.g., data centers, to run the learning algorithms. On the one hand, sending the original data is not desirable due to privacy and security concerns. On the other hand, many machine learning models may require unencrypted (plaintext) data, e.g., original images, to train models and perform inference. When offloading these computation tasks, sensitive information may be exposed to an untrustworthy cloud system that is susceptible to internal and external attacks. In many IoT systems, the learning procedure should be performed with data that is held by a large number of user devices at the edge of the Internet. These users may be unwilling to share the original data with the cloud and other users if security concerns cannot be addressed.

Systems and Methods for Sound-Enhanced Meeting Platforms

Computer-based, internet-connected, audio/video meeting platforms have become pervasive worldwide, especially since the 2020 emergence of the COVID-19 pandemic lockdowns. These meeting platforms include Cisco Webex, Google Meet, GoTo, Microsoft Teams, and Zoom. However, those popular platforms are optimized for meetings in which all the participants are attending the meeting online, individually. Accordingly, those platforms have shortcomings when used for hybrid meetings in which some participants attend together in person and others attend online. The existing platforms are also problematic for large meetings in big rooms (e.g., classrooms) in which most or all of the participants are in person. To address those suboptimal meeting-platform situations, researchers at UC Berkeley conceived systems, methods, algorithms, and other software for a meeting platform that is optimized for hybrid meetings and large in-person meetings. The Berkeley meeting platform offers a user experience that is familiar to users of the conventional meeting platforms. The Berkeley platform also does not require any specialized participant hardware or specialized physical room infrastructure (beyond standard internet connectivity).

Automatic Fine-Grained Radio Map Construction and Adaptation

The real-time position and mobility of a user is key to providing personalized location-based services (LBSs), such as navigation. With the pervasiveness of GPS-enabled mobile devices (MDs), LBSs in outdoor environments are common and effective. However, providing an equivalent quality of LBSs using GPS in indoor environments can be problematic. The ubiquity of both WiFi in indoor environments and WiFi-enabled MDs makes WiFi a promising alternative to GPS for indoor LBSs. The most promising approach to establishing a WiFi-based indoor positioning system requires the construction of a high-quality radio map of the indoor environment. However, the conventional approach to building the radio map is labor intensive, time-consuming, and vulnerable to temporal and environmental dynamics. To address this situation, researchers at UC Berkeley developed an approach for automatic, fine-grained radio map construction and adaptation. The Berkeley technology works both (a) in free space, where people and robots can move freely (e.g., corridors and open office space); and (b) in constrained space, which is blocked or not readily accessible. In addition to its use with WiFi signals, this technology could also be used with other RF signals, for example in densely populated and built-up urban areas where it can be suboptimal to rely only on GPS.
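For context on the conventional fingerprinting approach that the paragraph contrasts against, a minimal sketch follows: a radio map of RSSI vectors collected at surveyed reference points, queried by nearest-neighbor matching in signal space. The coordinates and RSSI values are made up for illustration.

```python
import numpy as np

# Radio map: surveyed (x, y) reference points and their RSSI fingerprints
# across three access points (dBm). All values are illustrative only.
points = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
fingerprints = np.array([[-40, -70, -65],
                         [-70, -42, -68],
                         [-66, -69, -41],
                         [-72, -55, -50]])

def locate(rssi, k=2):
    # k-nearest-neighbor match in signal space, averaged in position space.
    distances = np.linalg.norm(fingerprints - rssi, axis=1)
    nearest = np.argsort(distances)[:k]
    return points[nearest].mean(axis=0)

print(locate(np.array([-68, -50, -52])))   # -> [5.  2.5]
```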

Privacy Preserving Stream Analytics

UCLA researchers in the Department of Computer Science have developed a new privacy preserving mechanism for stream analytics.

Private Keyword Search on Streaming Data

UCLA researchers in the Department of Computer Science have developed a novel way in which to secretly search for and collect relevant information from a streaming database. The invention has application to intelligence gathering and data mining.
