This newsletter delves into cutting-edge advancements in signal processing and wireless communication, showcasing novel approaches to enhance efficiency and performance. A recurring theme across several papers is the exploration of hardware-efficient implementations for complex algorithms. For instance, Gomes et al. (2024) introduce a novel Time-Domain Clustered Equalizer (TDCE) for chromatic dispersion compensation in optical fiber communication. By exploiting tap-overlapping effects, they achieve significant energy savings compared to traditional frequency-domain equalizers, highlighting the importance of implementation strategy in hardware design.
Moving beyond traditional techniques, researchers are increasingly leveraging intelligent reflecting surfaces (IRS) to optimize wireless communication systems. Siddhartha et al. (2024) propose a novel approach that exploits the beam-split effect in IRS-aided systems. By strategically scheduling users on different subcarriers, they demonstrate the potential to achieve optimal array gain across the entire bandwidth. In a similar vein, Li et al. (2024) investigate the use of movable antenna arrays for over-the-air computation (AirComp). Their proposed algorithm optimizes antenna positions to minimize computation error, demonstrating the potential of this technology for enhancing AirComp networks.
Several papers also focus on developing robust and efficient algorithms for challenging signal processing tasks. Guo et al. (2024) introduce a novel wavenumber-domain ellipse fitting (WDEF) method for near-field channel estimation. By accurately characterizing the channel in the near-field region, their approach overcomes limitations associated with traditional Fresnel approximation techniques. Addressing the growing need for privacy-preserving data analysis, Yu et al. (2024) propose a distributed optimization-based approach for privacy-preserving maximum consensus. Their method leverages virtual nodes and a carefully designed initialization process to protect private data without compromising accuracy.
Finally, two papers explore the potential of graph-based signal processing for analyzing complex datasets. Pakiyarajah et al. (2024) present a method for adaptive subspace reconstruction in graph signal sampling, demonstrating its effectiveness in handling irregularly distributed data. Meanwhile, Sun et al. (2024) introduce hypergraph wavelets for representing higher-order relationships in data, showcasing their application in analyzing spatial transcriptomics data for Alzheimer's disease research. These papers collectively highlight the growing significance of graph-based approaches in diverse domains.
Rate-Splitting Multiple Access for Coexistence of Semantic and Bit Communications by Yuanwen Liu, Bruno Clerckx https://arxiv.org/abs/2409.10314
Caption: Achievable rate regions for RSMA, NOMA, FDMA, and time-sharing in a 6G uplink with semantic and bit users.
This paper delves into the critical question of how different multiple access (MA) schemes can effectively manage the coexistence of semantic and bit users in the uplink of future 6G cellular networks. The authors focus on a scenario where multiple semantic users, primarily interested in the meaning of transmitted messages (e.g., text), share the uplink with a traditional bit user requiring the original, unaltered message. The study compares three prominent MA schemes: orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA), and rate-splitting multiple access (RSMA).
A key challenge in this coexistence scenario stems from the unique nature of semantic communication. Unlike traditional bit-based communication, semantic communication introduces the constraint of "understandability". This means that the received message, even if not perfectly reconstructed at the bit level, must preserve its intended meaning. This constraint necessitates a different approach to both the design and evaluation of MA schemes in such a mixed-user environment.
The authors meticulously characterize the achievable rate regions for all three MA schemes, taking into account both the semantic rate, measured in semantic units per second (suts/s), and the traditional bit rate. They address the non-convexity of the rate expressions for NOMA and RSMA by employing successive convex approximation (SCA) algorithms, which iteratively solve convexified versions of the original problems to obtain high-quality solutions.
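To make the role of SCA concrete, the sketch below applies the same general recipe to a simpler non-convex two-user uplink power-allocation problem: each rate is written as a difference of logarithms, the subtracted log-interference term is linearized around the current iterate, and the resulting concave surrogate is maximized; repeating this produces a sequence of improving feasible points. The problem, channel gains, and budgets here are illustrative assumptions, not the paper's formulation (which additionally involves the semantic rates and understandability constraints).

```python
# Generic SCA sketch for a non-convex two-user uplink power allocation
# (all numbers are assumed; not the paper's actual optimization problem).
import numpy as np
from scipy.optimize import minimize

g = np.array([1.0, 0.6])        # assumed channel gains
sigma2 = 0.1                    # assumed noise power
p_max = np.array([1.0, 1.0])    # assumed per-user power budgets

def sum_rate(p):
    """Exact (non-convex) sum rate when each user is decoded treating the other as noise."""
    total = 0.0
    for k in range(2):
        interference = sigma2 + np.sum(g * p) - g[k] * p[k]
        total += np.log2(1.0 + g[k] * p[k] / interference)
    return total

def surrogate(p, p0):
    """Concave lower bound: the subtracted log-interference term is linearized at p0."""
    val = 0.0
    for k in range(2):
        intf0 = sigma2 + np.sum(g * p0) - g[k] * p0[k]
        grad = np.array([g[j] if j != k else 0.0 for j in range(2)]) / (intf0 * np.log(2.0))
        val += np.log2(sigma2 + np.sum(g * p)) - (np.log2(intf0) + grad @ (p - p0))
    return val

p = 0.5 * p_max                 # feasible starting point
for _ in range(20):             # SCA loop: maximize the surrogate, re-linearize, repeat
    res = minimize(lambda x: -surrogate(x, p), p,
                   bounds=[(0.0, pm) for pm in p_max], method="L-BFGS-B")
    p = res.x
print("power allocation:", np.round(p, 3), "sum rate:", round(sum_rate(p), 3))
```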
Simulation results consistently demonstrate the superiority of RSMA over NOMA in terms of the achievable rate region. This advantage stems from RSMA's inherent flexibility in decoding procedures, a direct result of its message-splitting approach. While frequency division multiple access (FDMA), with its orthogonal, interference-free resource allocation, proves more suitable when semantic rate requirements are low, RSMA emerges as the dominant performer in most other scenarios, particularly when high semantic rates are desired.
Interestingly, the study reveals a key difference between RSMA's behavior in bit-only communication versus the coexistence scenario. In traditional bit-only communication, RSMA consistently outperforms OMA without requiring any time-sharing. However, in the coexistence scenario, time-sharing between RSMA and OMA can sometimes be necessary to achieve the largest possible rate region.
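The mechanism behind time-sharing is easy to see: if one rate pair is achievable under RSMA and another under OMA, then spending a fraction t of the time in each mode achieves the corresponding convex combination, which can lie outside either scheme's individual region. The numbers below are purely hypothetical and serve only to illustrate that convex combination, not the paper's results.

```python
# Toy illustration of time-sharing between two schemes (numbers are assumptions).
import numpy as np

rate_rsma = np.array([2.1, 0.9])   # hypothetical (bit rate, semantic rate) under RSMA
rate_oma = np.array([1.2, 1.6])    # hypothetical (bit rate, semantic rate) under OMA

for t in np.linspace(0.0, 1.0, 5): # t = fraction of time spent operating in RSMA mode
    mixed = t * rate_rsma + (1.0 - t) * rate_oma
    print(f"t = {t:.2f} -> achievable (bit, semantic) pair = {np.round(mixed, 2)}")
```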
The paper further dissects the nuances of RSMA design, highlighting the differences between bit-only and coexistence scenarios. In the coexistence scenario, the authors propose a novel approach: splitting only the bit user's message into multiple streams (up to N<sub>s</sub>+1, where N<sub>s</sub> represents the number of semantic users). This strategy ensures decoding flexibility while simultaneously guaranteeing "understandability" for all semantic users. This contrasts with conventional RSMA for bit-only communication, where splitting the messages of N−1 out of N users is optimal. This distinction underscores the significant impact of semantic communication constraints on RSMA design principles.
Furthermore, the study reveals substantial differences in power allocation strategies between the two scenarios. In the coexistence scenario, the bit user may need to sacrifice some of its achievable rate to ensure that all semantic users achieve the required signal-to-interference-plus-noise ratio (SINR) for maintaining "understandability". This trade-off is governed by a predefined sentence similarity threshold (S<sub>th</sub>) that dictates the minimum acceptable level of semantic similarity.
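Schematically, and only as a paraphrase of the trade-off described above rather than the paper's exact problem, the coexistence power allocation can be read as maximizing the bit user's rate subject to every semantic user clearing the SINR level implied by S<sub>th</sub>:

```latex
% Schematic formulation (illustrative paraphrase, not the paper's exact problem)
\begin{align*}
\max_{p_b,\,\{p_k\}}\quad & R_b\bigl(p_b,\{p_k\}\bigr) \\
\text{s.t.}\quad & \mathrm{SINR}_k\bigl(p_b,\{p_k\}\bigr) \ge \gamma\!\left(S_{\mathrm{th}}\right),
  \qquad k = 1,\dots,N_s, \\
& 0 \le p_b \le P_b, \qquad 0 \le p_k \le P_k,
\end{align*}
```

Here γ(S<sub>th</sub>) denotes the minimum SINR at which the sentence-similarity metric stays above S<sub>th</sub>; tightening S<sub>th</sub> raises that SINR requirement and shrinks the rate the bit user can retain.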
Train-On-Request: An On-Device Continual Learning Workflow for Adaptive Real-World Brain Machine Interfaces by Lan Mei, Cristian Cioflan, Thorir Mar Ingolfsson, Victor Kartsch, Andrea Cossettini, Xiaying Wang, Luca Benini https://arxiv.org/abs/2409.09161
Caption: This diagram illustrates the Train-On-Request (TOR) workflow for Brain-Machine Interfaces (BMIs). In this approach, the BMI model can be updated on demand during Session 2 (and following) when performance drops, as indicated by the frowning emoji, using new data from the user. This allows for continuous adaptation and improved accuracy without lengthy recalibration sessions.
This paper tackles a significant hurdle in the development of user-friendly Brain-Machine Interfaces (BMIs): the challenge of adaptation. BMIs rely on electroencephalogram (EEG) signals, which are inherently variable, differing significantly between individuals and even fluctuating within the same individual over time. This variability necessitates frequent recalibration of BMI models, a process that is often time-consuming and disrupts the user's experience.
The authors introduce a novel workflow termed Train-On-Request (TOR) to address this adaptation challenge. TOR empowers on-device adaptation of BMI models, allowing them to adjust to real-time performance fluctuations. Instead of relying on lengthy and disruptive recalibration sessions, TOR allows users to fine-tune the model on demand. This is achieved by prompting the user to provide a small amount of new data when the system's accuracy dips below a predetermined threshold.
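The control flow behind this on-demand adaptation is simple to sketch. The toy example below uses a synthetic two-class "EEG feature" stream and a nearest-centroid classifier purely for illustration; the accuracy threshold, calibration-batch size, model, and all interfaces are assumptions, not the authors' implementation.

```python
# Toy sketch of the Train-On-Request trigger logic (all names, thresholds, and the
# nearest-centroid "model" are illustrative assumptions, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
ACC_THRESHOLD = 0.80      # assumed accuracy level below which adaptation is requested
CALIB_TRIALS = 16         # assumed number of cued trials collected on demand

class CentroidClassifier:
    """Stand-in for the on-device BMI model: one centroid per class."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None], axis=-1)
        return self.classes[d.argmin(axis=1)]
    def finetune(self, X, y, lr=0.5):
        # One cheap on-device update: pull each centroid toward the new class means.
        for i, c in enumerate(self.classes):
            if np.any(y == c):
                self.centroids[i] += lr * (X[y == c].mean(axis=0) - self.centroids[i])

def make_session(shift, n=200, dim=8):
    """Synthetic two-class 'EEG features' with a session-dependent drift."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, dim)) + 2.0 * y[:, None] + shift
    return X, y

X0, y0 = make_session(shift=0.0)                 # initial calibration session
model = CentroidClassifier().fit(X0, y0)

X1, y1 = make_session(shift=1.5)                 # later session: signals have drifted
correct = []
for i in range(len(y1)):
    correct.append(model.predict(X1[i:i + 1])[0] == y1[i])
    window = correct[-CALIB_TRIALS:]             # running accuracy over recent trials
    if len(window) == CALIB_TRIALS and np.mean(window) < ACC_THRESHOLD:
        # Train-On-Request: ask the user for a short block of cued trials, adapt, resume.
        Xc, yc = X1[i + 1:i + 1 + CALIB_TRIALS], y1[i + 1:i + 1 + CALIB_TRIALS]
        model.finetune(Xc, yc)
        correct.clear()
acc = np.mean([model.predict(X1[j:j + 1])[0] == y1[j] for j in range(len(y1))])
print(f"accuracy on the drifted session after on-demand updates: {acc:.2f}")
```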
To further enhance TOR's performance over time, the authors incorporate continual learning techniques. Their research reveals that an Experience Replay (ER)-based TOR approach yields impressive results, achieving up to 92.33% accuracy while requiring 46.67% less training data compared to conventional methods. This reduction in training data translates to faster adaptation and less user effort.
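The experience-replay component can be sketched in the same spirit: new calibration trials are mixed with a small buffer of stored past trials so each on-demand update adapts to the current session without forgetting earlier ones. The buffer size, sampling strategy, and the `finetune(X, y)` interface below are assumptions that plug into the toy classifier from the previous sketch, not the authors' code.

```python
# Sketch of the experience-replay idea behind ER-based TOR (assumed buffer size,
# sampling strategy, and model interface; pairs with the toy classifier above).
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Small on-device store of past calibration trials."""
    def __init__(self, capacity=64):
        self.capacity, self.X, self.y = capacity, [], []
    def add(self, X_new, y_new):
        for x, label in zip(X_new, y_new):
            if len(self.X) < self.capacity:
                self.X.append(x); self.y.append(label)
            else:                                   # reservoir-style replacement
                j = rng.integers(0, self.capacity)
                self.X[j], self.y[j] = x, label
    def sample(self, n):
        n = min(n, len(self.X))
        if n == 0:
            return None, None
        idx = rng.choice(len(self.X), size=n, replace=False)
        return np.stack(self.X)[idx], np.array(self.y)[idx]

def finetune_with_replay(model, X_new, y_new, buffer, replay_ratio=1.0):
    """Mix new cued trials with replayed old ones, then run one on-device update."""
    Xr, yr = buffer.sample(int(replay_ratio * len(X_new)))
    X_mix = X_new if Xr is None else np.concatenate([X_new, Xr])
    y_mix = y_new if yr is None else np.concatenate([y_new, yr])
    model.finetune(X_mix, y_mix)        # same `finetune` interface as the sketch above
    buffer.add(X_new, y_new)            # only genuinely new trials enter the buffer
```

In the trigger loop from the previous sketch, the call `model.finetune(Xc, yc)` would simply be replaced by `finetune_with_replay(model, Xc, yc, buffer)`, so each on-demand update also rehearses a few stored trials from earlier sessions.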
Demonstrating the practical feasibility of TOR for real-world applications, the authors successfully deployed it on a low-power GAP9 microcontroller unit (MCU). Their findings indicate that a single on-device training step takes a mere 21.6 ms and consumes only 1 mJ of energy. This remarkable efficiency suggests that TOR can be seamlessly integrated into wearable BMI devices, paving the way for more practical and user-friendly BMIs in everyday life.
The development of TOR represents a substantial leap forward in making BMIs more adaptable and user-friendly. By enabling on-device, real-time adaptation, TOR has the potential to democratize BMI technology, making it more accessible and appealing to a wider range of users, even in challenging non-clinical environments.
This newsletter highlights the ongoing evolution of signal processing and wireless communication. The exploration of hardware-efficient algorithms, such as the Time-Domain Clustered Equalizer for optical fiber communication, underscores the increasing emphasis on bridging the gap between theoretical advancements and practical implementations.
Simultaneously, we see the rise of innovative techniques like intelligent reflecting surfaces (IRS) and movable antenna arrays, pushing the boundaries of wireless communication capabilities and paving the way for enhanced data rates and more efficient network operations.
The development of algorithms tailored for complex, real-world challenges is also evident. From the wavenumber-domain ellipse fitting method for near-field channel estimation to the distributed optimization approach for privacy-preserving maximum consensus, researchers are tackling critical issues that are essential for the advancement of various technologies.
The increasing prominence of graph-based signal processing is another key takeaway. Whether it's for handling irregularly sampled data or analyzing intricate relationships in biological datasets, graph-based approaches are proving their versatility and effectiveness across diverse domains.
Finally, the exploration of Rate-Splitting Multiple Access (RSMA) for coexisting semantic and bit communications, along with the innovative Train-On-Request (TOR) workflow for adaptive BMIs, exemplifies the continuous quest for enhanced efficiency, user-friendliness, and seamless integration of cutting-edge technologies into our daily lives.