This collection of preprints explores diverse applications of machine learning and signal processing in wireless communications, sensing, and other domains. Several papers focus on enhancing the performance and efficiency of future wireless systems, particularly in the context of integrated sensing and communications (ISAC). Mobarak, Bao, and Erol-Kantarci (2025) propose a semi-self sensing hybrid RIS for THz bands, employing deep reinforcement learning (DRL) to optimize beamforming and precoding for sum-rate maximization under sensing constraints. Similarly, Bao and Erol-Kantarci (2025) introduce a heuristic DRL approach for RIS phase shift optimization in secure satellite communication systems with rate splitting multiple access (RSMA), demonstrating improved performance and efficiency compared to traditional algorithms. Other works tackle specific challenges in wireless systems, such as faster-than-Nyquist equalization using convolutional neural networks (De Filippo et al., 2025), data-aided regularization of direct-estimate combiners in distributed MIMO systems (Gouda et al., 2025), and efficient bearing sensor data compression via an asymmetrical autoencoder (Zhu & Cetin, 2025).
Beyond communication systems, several papers explore novel applications of machine learning and signal processing. Wang et al. (2025) introduce a machine learning-based method for probe skew correction in high-frequency BH loop measurements, offering an alternative to traditional hardware-based approaches. Wohrer (2025) proposes a diffeomorphic ICP registration algorithm for point set registration, generalizing the classic ICP algorithm using the large deformation diffeomorphic metric mapping (LDDMM) framework. Balanov, Kreymer, and Bendory (2025) analyze the sample complexity of multi-target detection, providing insights into estimation limits in noisy environments with applications in cryo-EM. Kumru, Taşdelen, and Köymen (2025) investigate wideband pulse generation for underwater applications using parametric arrays.
A recurring theme across several papers is the use of deep learning for channel estimation and beamforming optimization. Li et al. (2025) propose a keypoint detection network for near-field user localization and channel reconstruction in XL-MIMO systems, significantly reducing computational complexity. Ibrahim et al. (2025) introduce a block phase tracking reference signal allocation method for DFT-s-OFDM to enhance phase noise tracking. Ghassemi et al. (2025) leverage vision transformers for blockage prediction in dual-band mmWave communication, utilizing a hierarchical fog-cloud architecture with generative AI-based compression. These works highlight the growing trend of incorporating deep learning into various aspects of wireless system design.
Further contributions include the development of novel metrics and algorithms for specific applications. Chateauvert, Ethier, and Florea (2025) investigate the impact of geospatial inputs on rural path loss estimation using the ITU-R P.1812-7 model. Xu et al. (2025) propose an SE(3)-based trajectory optimization and target tracking scheme for UAV-enabled ISAC systems. Hawkins et al. (2025) demonstrate the advantages of CDMA/OTFS for ISAC, achieving superior sensing performance compared to pure OTFS. Abanto-Leon and Maghsudi (2025) tackle the joint optimization problem of user and target scheduling, pairing, and low-resolution beamforming for ISAC systems, proposing an exact MILP reformulation. Geiger et al. (2025a, 2025b) explore long-range sensing with CP-OFDM and joint optimization of constellation shaping for OFDM-ISAC, respectively.
Finally, several papers address fundamental aspects of signal processing and information theory. Haghshenas, Mahmood, and Gidlund (2025) propose an efficient multi-source localization method using angular domain MUSIC. Moser, Werzi, and Lunglmayr (2025) analyze the integrate-and-fire neuron model from a signal processing perspective, connecting it to the concept of send-on-delta sampling. Zhang et al. (2025) investigate the performance tradeoff between bistatic positioning and monostatic sensing, proposing a multi-objective optimization framework. These diverse contributions collectively advance the state-of-the-art in signal processing, machine learning, and their applications in various domains, paving the way for future innovations in wireless communications, sensing, and beyond.
Harnessing Rydberg Atomic Receivers: From Quantum Physics to Wireless Communications by Yuanbin Chen, Xufeng Guo, Chau Yuen, Yufei Zhao, Yong Liang Guan, Chong Meng Samson See, Mérouane Debbah, Lajos Hanzo https://arxiv.org/abs/2501.11842
Caption: This diagram compares three receiver architectures: a conventional RF receiver, an LO-free Rydberg atomic receiver, and an LO-dressed Rydberg atomic receiver. It illustrates the signal processing chain for each, highlighting key components and noise sources, along with mathematical expressions for the received signal and SNR. The diagram emphasizes the unique capabilities of Rydberg atomic receivers, particularly their sensitivity and simplified hardware requirements compared to traditional RF systems.
This research presents a paradigm shift in wireless communications by introducing Rydberg atomic receivers. These receivers, classified as LO-free and LO-dressed, leverage quantum phenomena to achieve unprecedented sensitivity and performance improvements over conventional RF receivers.
LO-free receivers operate without a local oscillator (LO) and excel in short-range operations. They measure the amplitude of RF signals by detecting changes in the energy levels of Rydberg atoms caused by the Autler-Townes (AT) splitting effect. This approach simplifies the receiver architecture significantly, eliminating the need for complex antenna structures and front-end electronics.
LO-dressed receivers, on the other hand, incorporate an LO and are exceptionally sensitive in long-distance scenarios. They measure both the amplitude and phase of weak signals by using a Rydberg atom-based mixer. This enables highly accurate signal detection at extremely low power levels.
Mathematical models developed for both receiver types account for the interaction with RF signals, various noise sources (including background, quantum projection, thermal, and observation uncertainty noise), and key performance metrics like SNR, capacity, and SER. The SNR for the LO-free receiver is given by: SNR<sub>Ry</sub> = (P<sub>Rx</sub>|h|²/ħ²)(Ω<sub>RF</sub>²/σ²<sub>Ry</sub>), where P<sub>Rx</sub> is the received power, h is the channel gain, ħ is the reduced Planck constant, Ω<sub>RF</sub> is the Rabi frequency of the RF signal, and σ²<sub>Ry</sub> is the total noise variance. The SNR for the LO-dressed receiver is given by: SNR<sub>Ry,LO</sub> = (G<sub>LNA</sub>R<sub>L</sub>D²κ²P<sub>Rx</sub>|h|²)/σ²<sub>y,LO</sub>, where G<sub>LNA</sub> is the LNA gain, R<sub>L</sub> is the PD output impedance, D is the photodiode's responsivity, κ is the intrinsic gain coefficient, and σ²<sub>y,LO</sub> is the total noise variance for the LO-dressed system.
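The two SNR expressions above can be written as small helper functions, a hedged sketch that simply transcribes the quoted formulas (parameter values in any call are placeholders, not numbers from the paper):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)

def snr_lo_free(P_rx, h, omega_rf, sigma2_ry):
    """LO-free receiver SNR, following the quoted expression
    SNR_Ry = (P_Rx |h|^2 / hbar^2) * (Omega_RF^2 / sigma^2_Ry)."""
    return (P_rx * np.abs(h) ** 2 / HBAR ** 2) * (omega_rf ** 2 / sigma2_ry)

def snr_lo_dressed(G_lna, R_l, D, kappa, P_rx, h, sigma2_y_lo):
    """LO-dressed receiver SNR, following the quoted expression
    SNR_Ry,LO = G_LNA * R_L * D^2 * kappa^2 * P_Rx * |h|^2 / sigma^2_y,LO."""
    return (G_lna * R_l * D ** 2 * kappa ** 2 * P_rx * np.abs(h) ** 2) / sigma2_y_lo
```

Such helpers make it easy to sweep received power or noise variance and compare the two architectures side by side.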
Simulations reveal that LO-dressed systems achieve an astounding ~44 dB SNR gain over conventional RF receivers at -10 dBm transmit power, extending the effective coverage range by a factor of 150. They also support higher-order QAM with reduced SER. LO-free systems, while range-limited, offer superior SNR performance in close proximity (up to ~25 dB improvement). The study also analyzes distortion effects. LO-free systems suffer from ambiguous observations at weak RF Rabi frequencies, while LO-dressed systems experience distortion when strong RF Rabi frequencies exceed the linear dynamic range. This research opens exciting possibilities for future wireless systems, including wideband designs, Rydberg atomic MIMO, and mitigation of hardware impairments.
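The reported gain and range figures are mutually consistent under a simple inverse-square (free-space) propagation assumption, which is my assumption for this back-of-the-envelope check, not a model from the paper:

```python
gain_db = 44.0  # reported SNR gain of the LO-dressed receiver

# Under free-space path loss, received power falls as 1/d^2, so an SNR
# margin of G dB buys a range extension factor of 10**(G/20).
range_factor = 10 ** (gain_db / 20)
print(round(range_factor))  # 158, close to the reported 150x extension
```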
Physics-Informed Machine Learning for Efficient Reconfigurable Intelligent Surface Design by Zhen Zhang, Jun Hui Qiu, Jun Wei Zhang, Hui Dong Li, Dong Tang, Qiang Cheng, Wei Lin https://arxiv.org/abs/2501.11323
This paper addresses the challenge of designing Reconfigurable Intelligent Surfaces (RIS) by introducing a novel physics-informed machine learning approach. Traditional RIS design relies on computationally intensive full-wave electromagnetic simulations, which are time-consuming and complex. This new method significantly accelerates the design process while maintaining accuracy.
The proposed method combines a Multi-Layer Perceptron (MLP) neural network with a Dual-Port Network (DPN) model. The MLP is trained to predict the impedance matrix Z of the RIS element based on its geometric parameters x<sub>p</sub>. This prediction drastically reduces the reliance on repeated EM simulations during the optimization process. The DPN model, incorporating the diode's circuit parameters x<sub>a</sub>, then uses the predicted impedance matrix to calculate the reflection coefficient S<sub>11</sub>: S<sub>11</sub>(x<sub>p</sub>, x<sub>a</sub>) = F(Z<sub>MLP</sub>(x<sub>p</sub>), x<sub>a</sub>). This combined MLP-DPN model allows for fast evaluation of the RIS element's performance under various structural and diode configurations, enabling efficient optimization.
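The surrogate-plus-circuit idea can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the MLP weights below are random rather than trained on EM simulations, and the dual-port network F is replaced by a one-port simplification in which an assumed diode impedance loads the predicted element impedance:

```python
import numpy as np

def mlp_impedance(x_p, W1, b1, W2, b2):
    """Toy MLP surrogate: geometric parameters x_p -> predicted element
    impedance (real, imaginary). In the paper the weights come from training
    on full-wave EM data; the random weights below are purely illustrative."""
    h = np.tanh(W1 @ x_p + b1)
    re_z, im_z = W2 @ h + b2
    return complex(re_z, im_z)

def s11_from_impedance(z_elem, z_diode, z0=50.0):
    """Reflection coefficient against a z0 reference, using a one-port
    simplification (an assumption; the paper uses a dual-port network F)."""
    z_total = z_elem + z_diode
    return (z_total - z0) / (z_total + z0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)
x_p = np.array([0.5, 1.2, 0.8])  # toy geometric parameters
s11 = s11_from_impedance(mlp_impedance(x_p, W1, b1, W2, b2), z_diode=1 - 2j)
```

The point of the pattern is that evaluating `s11` for a new `(x_p, x_a)` pair costs microseconds instead of a full-wave simulation, which is what makes optimization loops over the geometry tractable.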
The effectiveness of this method was demonstrated through the design and fabrication of a 3-bit RIS element. The MLP model, trained on 2000 samples, achieved impressive accuracy with a Mean Squared Error (MSE) of less than 0.1 and a Mean Absolute Error (MAE) of less than 0.07 for predicting the impedance matrix elements. The design process for each of the three RIS elements, operating at 2.68 GHz, 3.14 GHz, and 3.3 GHz, took less than 0.2 hours, a substantial improvement over traditional methods. Measurements of the fabricated 16x16 RIS, based on the 3.3 GHz design, showed eight distinct phase states at 3.5 GHz with reflection magnitudes greater than -3.93 dB, confirming the design's accuracy. The close agreement between simulated and measured far-field scattering patterns further validates the method's reliability. This physics-driven machine learning approach offers a significant advancement in RIS design, paving the way for the development of complex RIS structures for diverse applications.
Exploring the Potential of Large Language Models for Massive MIMO CSI Feedback by Yiming Cui, Jiajia Guo, Chao-Kai Wen, Shi Jin, En Tong https://arxiv.org/abs/2501.10630
Caption: (a) Analogy between sentence error correction by LLMs and CSI feedback. (b) Autoencoder architecture for CSI compression and reconstruction. (c) Proposed LLM-based CSI feedback framework, where pre-processed CSI is treated as "words" and fed to a pre-trained LLM for reconstruction, analogous to sentence correction.
This paper explores the novel application of Large Language Models (LLMs) for compressing and reconstructing Channel State Information (CSI) in massive Multiple-Input Multiple-Output (MIMO) systems. Efficient CSI feedback is crucial for massive MIMO performance, but traditional methods struggle with the high dimensionality of CSI data. This research leverages the powerful denoising and error correction capabilities of LLMs, typically used in natural language processing, to address this challenge.
The proposed framework treats CSI vectors as "words" in a sentence, drawing a parallel between CSI reconstruction and sentence error correction. It utilizes a pre-trained GPT-2 model, adapted for CSI data through specialized pre-processing, embedding, and post-processing modules. Pre-processing involves transforming the CSI matrix to the angular-frequency domain using DFT (Hₐ = FₐHᵢₙ) and normalization. The embedding module maps the CSI data into token representations suitable for the LLM, incorporating positional encoding to capture frequency correlations. Importantly, most of the LLM's parameters are frozen during training, preserving its generalization ability and minimizing computational overhead. The post-processing module then transforms the LLM's output back into the reconstructed CSI matrix.
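The pre-processing step described above can be sketched in numpy; the matrix orientation (antennas × subcarriers) and the max-magnitude normalization are my assumptions:

```python
import numpy as np

def preprocess_csi(H_in):
    """Map a spatial-frequency CSI matrix (antennas x subcarriers) to the
    angular-frequency domain, H_a = F_a @ H_in, then scale to unit peak
    magnitude (the normalization choice is an assumption)."""
    n_ant = H_in.shape[0]
    F_a = np.fft.fft(np.eye(n_ant)) / np.sqrt(n_ant)  # unitary DFT matrix
    H_a = F_a @ H_in
    return H_a / np.max(np.abs(H_a))

H_in = (np.random.randn(32, 64) + 1j * np.random.randn(32, 64)) / np.sqrt(2)
H_a = preprocess_csi(H_in)
```

After this step, each column (or group of columns) of `H_a` would be embedded into a token vector and fed to the frozen LLM backbone.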
Simulations using a realistic 3GPP scenario demonstrate significant performance improvements over conventional small DNN models and state-of-the-art methods like TransNet. The LLM-based approach achieves superior Normalized Mean Square Error (NMSE) and Generalized Cosine Similarity (GCS) across various compression ratios, with the advantage widening at higher compression levels. For instance, it achieves up to a 2 dB NMSE improvement over the small model at a compression ratio of 32. Furthermore, the LLM-based method exhibits remarkable training efficiency, requiring minimal data and demonstrating robust generalization to unseen scenarios. This stems from the LLM's ability to leverage knowledge acquired during pre-training on massive text datasets. This research opens exciting new directions for applying LLMs in wireless communications, potentially leading to more efficient and robust massive MIMO deployments.
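The two reported metrics have standard definitions in the CSI-feedback literature; a minimal sketch (averaging GCS over subcarrier columns is my assumption about the exact convention):

```python
import numpy as np

def nmse_db(H, H_hat):
    """Normalized MSE in dB: reconstruction error energy over CSI energy."""
    err = np.sum(np.abs(H - H_hat) ** 2)
    return 10 * np.log10(err / np.sum(np.abs(H) ** 2))

def gcs(H, H_hat):
    """Generalized cosine similarity, averaged over subcarriers (columns)."""
    num = np.abs(np.sum(np.conj(H_hat) * H, axis=0))
    den = np.linalg.norm(H, axis=0) * np.linalg.norm(H_hat, axis=0)
    return float(np.mean(num / den))

H = np.random.randn(32, 64) + 1j * np.random.randn(32, 64)
assert np.isclose(gcs(H, H), 1.0)   # perfect reconstruction
assert nmse_db(H, 0.9 * H) < 0      # small error gives negative dB
```

Lower (more negative) NMSE and GCS closer to 1 both indicate better reconstruction, so the 2 dB NMSE gap at compression ratio 32 is a meaningful margin.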
This newsletter highlights a convergence of cutting-edge techniques in machine learning and signal processing to address critical challenges in wireless communication and sensing. The exploration of Rydberg atomic receivers presents a radical departure from traditional RF technology, promising substantial gains in sensitivity and performance by leveraging quantum phenomena. Meanwhile, the application of physics-informed machine learning to RIS design offers a practical path to optimizing these complex structures efficiently, significantly reducing computational burdens. Finally, the innovative use of Large Language Models for CSI feedback in massive MIMO systems showcases the potential of adapting pre-trained models to solve complex wireless communication problems, offering improvements in both performance and training efficiency. These advancements collectively represent a significant step towards realizing the full potential of future wireless networks and sensing systems.