NTT DoCoMo has released details of its latest work with Nokia to develop an AI-native Air Interface, known in 6G research as the AI-AI.
Its most recent demo moved from a lab proof of concept using signal generators to a real-world indoor setup, with physical antennas transmitting and receiving data across test spectrum and a GPU server carrying out the ML calculations that characterise the air interface.
The test used an AI-based baseband, in which air interface processing is not derived from reference signals sent between transmitter and receiver; instead, AI carries out channel estimation and other processing blocks. Applying AI/ML to signal processing tasks such as channel estimation, channel equalisation and de-mapping could form the basis of an AI-native Air Interface in 6G.
The operator said that AI-based wireless processing improves communication performance by learning optimal modulation schemes for each specific radio propagation environment. Instead of relying on the reference signals traditionally inserted at the transmitter to estimate the propagation channel, an AI-AI estimates the channel directly from the received signal. Because no reference signals are transmitted, transmission efficiency is higher.
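As an illustration of what the AI-AI dispenses with, the conventional pilot-based approach can be sketched in a few lines. This is a minimal single-tap NumPy example; the pilot value and the 0.05 noise level are illustrative, not DoCoMo's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat-fading channel coefficient the receiver must estimate.
h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# Conventional approach: a known reference (pilot) symbol is inserted at the
# transmitter, consuming airtime that could otherwise carry user data.
pilot = 1 + 0j
noise = 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
rx_pilot = h * pilot + noise

# Least-squares channel estimate from the pilot alone.
h_ls = rx_pilot / pilot

print(abs(h - h_ls))  # small estimation error, at the cost of one pilot symbol
```

An AI-AI instead trains a model to infer the channel from the data-bearing signal itself, so the pilot above, and its overhead, can be removed — which is where the transmission-efficiency gain comes from.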
Unlike traditional communication systems, which adapt to the environment passively using static algorithms, an AI-AI could actively and automatically design signal processing schemes from the observed channel fading state, optimising the effectiveness and reliability of data transmission under the constraints of the current environment.
You can see a visualisation of this that TMN covered back at MWC2023, where Nokia presented a proof of concept that replaced traditional processing blocks with ML technology.
That same year, we also reported on work Rohde & Schwarz had carried out with Nvidia on a neural receiver demonstrator, replacing conventional signal processing blocks with neural models (trained ML models).
R&S said that its simulations suggest a neural receiver would increase link quality and throughput compared to the current high-performance deterministic software algorithms used in 5G NR.
In the absence of live data models for ML training, test and measurement equipment generates various data sets with different signal configurations to train machine learning models for signal processing tasks.
In the setup at the Rohde & Schwarz booth, the signal generator emulates two individual users transmitting an 80 MHz wide signal in the uplink direction with a MIMO 2×2 signal configuration. Noise is applied to simulate realistic radio channel conditions (in the demo we saw, a bottle of water was placed between the two antennas in the picture above). A satellite receiver acts as the receiver, capturing the transmitted signal and providing it as data via a real-time streaming interface to a server. There, the signal is pre-processed, with this data set serving as input for a neural receiver implemented using NVIDIA Sionna, a GPU-accelerated open-source library for link-level simulation. Sionna enables prototyping of complex communications system architectures, supporting the integration of machine learning in 6G signal processing.
As part of the demonstration, the trained neural receiver is compared to the classical concept of a linear minimum mean squared error (LMMSE) receiver architecture, which applies traditional signal processing techniques based on deterministically developed software algorithms (see image above).
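The LMMSE baseline can be sketched briefly. This is a generic textbook LMMSE equaliser for a 2×2 MIMO link like the demo's, not R&S's implementation; the channel matrix and noise level are made-up values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up, well-conditioned 2x2 channel and two QPSK symbols (one per stream).
H = np.array([[1.0 + 0.2j, 0.3 - 0.1j],
              [0.2 + 0.1j, 0.9 - 0.3j]])
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)

noise_var = 0.01
n = np.sqrt(noise_var / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + n  # received signal

# LMMSE equaliser: W = (H^H H + sigma^2 I)^-1 H^H — a deterministic formula
# computed from the channel estimate rather than a learned model.
W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(2)) @ H.conj().T
x_hat = W @ y

print(np.round(x_hat, 2))  # close to the transmitted symbols
```

A neural receiver replaces W (and the subsequent demapping) with a trained model, which can learn channel behaviour that the closed-form expression does not capture.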
The challenge for the AI-AI is to understand, in very near real time, the physical and RF environment, so as to design the best waveform for the use case. Here the network itself, acting as a sensor, can provide data for the AI-AI. It also places huge importance on the machine learning techniques that are applied.
A project such as CENTRIC outlines many of these issues in its 2022 first call paper, Towards an AI-native, user-centric air interface for 6G networks, discussing the applicable Machine Learning techniques.
A paper that looks at the techniques for environment sensing data acquisition is “Wireless Environment Information Sensing, Feature, Semantic, and Knowledge: Four Steps Towards 6G AI-Enabled Air Interface,” written by researchers at China Mobile Research Institute and Beijing University.
NTT DoCoMo and Nokia this week
Then this week NTT DoCoMo said it had tested out Nokia’s ML-based baseband indoors, using 4.8 GHz spectrum – see picture below.
In the test, conducted at the NTT Yokosuka Research and Development Center, radio waves were transmitted in a 7×13 metre laboratory to measure throughput characteristics. AI-based baseband transmission and reception processing was implemented in software on a GPU server, and the 4.8 GHz frequency band was accessed using Software Defined Radio (SDR) hardware. With 5G New Radio (NR) as the base radio communication method, the receiving antenna was mounted on a movable trolley to perform measurements. NTT DoCoMo said that compared to conventional methods without AI, throughput characteristics were measurably improved by 6-16%, and stationary tests conducted at 25 points along the route confirmed an 18% improvement. The results demonstrate that AI-enabled modulation and demodulation technology improves communication performance in both mobile and stationary environments.
Going forward, the operator said further testing will be conducted in more complex indoor and outdoor environments to clarify the range of possible applications for AI-enabled modulation and demodulation technology and to verify the technologies needed for broad commercial deployment.
So what’s the advantage of an AI-native Air Interface? One aspect could be that the Air Interface becomes more intelligent, learning what use case it is trying to fulfil within a given environment. For instance, in a factory it could optimise for sensor connectivity when required, and then for video surveillance. But a more readily realisable benefit could simply be increased throughput, by raising the Signal to Noise Ratio and reducing the Block Error Rate. This would also allow operators to achieve similar results with lower power demand.
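The link between SNR and throughput can be made concrete with Shannon capacity, C = B·log2(1 + SNR). The sketch below uses the 80 MHz bandwidth from the R&S demo with a baseline SNR of 10 dB and a hypothetical 3 dB improvement — both illustrative figures, not measured ones:

```python
import numpy as np

B = 80e6  # Hz — the bandwidth used in the R&S demo

def capacity(snr_db: float) -> float:
    """Shannon capacity in bit/s for a given SNR in dB."""
    return B * np.log2(1 + 10 ** (snr_db / 10))

c_before = capacity(10.0)  # illustrative baseline SNR
c_after = capacity(13.0)   # hypothetical 3 dB gain from AI processing

print(f"{c_before/1e6:.0f} -> {c_after/1e6:.0f} Mbit/s "
      f"(+{100 * (c_after / c_before - 1):.0f}%)")
# prints: 277 -> 351 Mbit/s (+27%)
```

A few dB at the air interface thus translates into double-digit percentage throughput gains, the same order as the 6-18% DoCoMo reported — or, turned around, the same throughput at lower transmit power.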
Through the standards
3GPP’s Release 18 began the formal introduction of ML into the Air Interface. A study on AI/ML for the NR Air Interface started in 2022 with the target to “explore the benefits of augmenting the air-interface with features enabling improved support of AI/ML based algorithms for enhanced performance and/or reduced complexity/overhead”.
This Qualcomm presentation (Towards an AI-native communication systems design) outlines how Release 18 focussed on three key wireless AI use cases.
- Channel feedback via more efficient, predictive Channel State Information (CSI) feedback to improve user downlink throughput and reduce uplink overhead.
- Beam management/prediction in the time/spatial domain for overhead and latency reduction, improving beam selection accuracy, especially useful for mmWave systems.
- Positioning accuracy enhancements for different indoor and outdoor scenarios, including, e.g., those with heavy non-line-of-sight conditions.
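To make the first of those use cases concrete, here is a toy stand-in for learned CSI feedback compression. Release 18 studied two-sided autoencoder models; this sketch substitutes a truncated SVD basis as a linear "encoder/decoder" pair, with all dimensions and data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n_tx, n_samples, n_latent = 32, 200, 8  # antenna ports, CSI samples, feedback size

# Synthetic correlated CSI: each sample is a mix of 4 dominant propagation paths.
paths = rng.standard_normal((4, n_tx))
H = rng.standard_normal((n_samples, 4)) @ paths \
    + 0.05 * rng.standard_normal((n_samples, n_tx))

# "Training": learn a compact basis from historical CSI (PCA via SVD).
_, _, Vt = np.linalg.svd(H, full_matrices=False)
basis = Vt[:n_latent]            # shared between UE (encoder) and gNB (decoder)

code = H @ basis.T               # UE feeds back 8 values instead of 32
H_hat = code @ basis             # gNB reconstructs the full CSI

nmse = np.sum(np.abs(H - H_hat) ** 2) / np.sum(np.abs(H) ** 2)
print(f"feedback compressed 4x, reconstruction NMSE = {nmse:.4f}")
```

The Release 18 study items consider trained neural encoders and decoders rather than a fixed linear basis, but the trade-off they target is the same: less uplink feedback overhead for near-equal reconstruction accuracy.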
This slide from an Ericsson presentation shows how work on the AI-AI will continue through 5G Advanced Releases (3GPP Release 19 into mid-2025) and into the 6G time frame. (The road of artificial intelligence towards the 6G air interface – Ericsson)