To ensure that SMEs have a front-row seat to telecoms standards making, our Standards Champion, Andy Reid, attends key standards meetings to observe, participate in conversations and report back on key themes and discussion points.
Here, Reid shares his reflections from FYUZ.
What are TIP and FYUZ?
The Telecom Infra Project (TIP) was founded as an industry association nearly ten years ago with the objective of establishing greater openness and interoperability in component supply across the full range of telecoms infrastructure. This came at a time when the ‘softwarisation’ of the network was becoming a reality through major industry initiatives such as network functions virtualisation (NFV) and software-defined networking (SDN). As a result, the separation of hardware and software components has been a significant concern for TIP. This introduced a strong tie with cloud technologies, and the ‘hyperscalers’ – notably Meta – have been major participants in TIP.
TIP’s working method is practical and hands-on. Rather than attempting to define standards from scratch, it sets out to profile and refine standards from other SDOs to improve their effective interoperability, to test and demonstrate that interoperability against practical use cases, and then to share the results.
FYUZ (pronounced ‘fuse’) is TIP’s annual get-together and this year spanned three days in Dublin. It combines sessions reporting on the progress of the profiling and testing, sessions on hot topics shaping future activity, and an exhibition of practical solutions built on the TIP architecture.
However, like other initiatives aimed at increasing openness and diversity in the telecoms supply chain (notably the O-RAN Alliance), success has been harder to achieve than the founders perhaps envisaged ten years ago – the big vendors are still the big vendors – and integration across different vendors still seems the exception rather than the rule. As I shall report, AT&T gave a particularly interesting answer to the question of why at this year’s FYUZ.
TIP Organisation
TIP currently has five technical project groups:
- Open RAN - covers the radio part of mobile networking; its architecture is the basis for the O-RAN Alliance (so there is a close relationship here);
- Open LAN - focuses on technologies for fixed access networks, including WiFi (Open WiFi project) and open LAN switching (OLS), which interconnects and controls WiFi access points;
- Open optical and packet transport (OOPT) - addresses transport infrastructure from the base station (Open RAN) and the home/business site (Open LAN) all the way to the data centre, with a focus on hardware/software separation and the use of ‘white boxes’ as the basic hardware components;
- Neutral hosting and infra sharing - concerned with creating the business models and associated infrastructure, notably LAN and RAN, which can be shared between different operators;
- Telco AI - concerned with enabling network operators to make use of AI.
In addition, there are organisational groups for test and validation, as well as training and certification.
Key Themes from FYUZ 2025
There was a great deal of information presented over the course of the three days as well as insights from side conversations. I’ve tried to rationalise this and capture significant points as well as recurring themes.
What is openness?
This was not necessarily an obvious theme from the presentations but was certainly a recurring theme of many private conversations. For the moment, the major operators still predominantly buy from the key vendors and so there was a nagging question as to whether the major operators are really committed to openness. A notable recent example of this was when AT&T tendered for RAN equipment fully according to O-RAN open specifications and then awarded a single contract for all components to Ericsson.
On the first morning, AT&T were challenged on this and gave an interesting answer: for them, open RAN is about common management across the network. They explained that they already have a diverse estate of equipment from different vendors as a result of their size and their history of mergers and acquisitions. The openness of the open RAN architecture is critical for them in applying common management and offering common services and new features across this diverse estate. They were also quite open in saying that, for them, SMEs with innovative ideas need to work through a major vendor. This probably reflects the situation for most major operators.
It’s then particularly interesting to put this alongside some of the presentations and discussions on private/campus networks and neutral hosts. These suggest that diversification may be happening, but not exactly as originally foreseen. The original idea was that existing operators would be able to procure different components separately and easily integrate them themselves because the interfaces are all sufficiently validated.
What seems to be emerging now are new neutral hosting and private networking opportunities that fulfil bespoke needs as well as the many ‘not-spots’ in mobile coverage. These can cover transport systems, in-building networking, campus networks, industry-specific sites, smaller rural common infrastructure projects, etc. They sit outside the need for full-scale integration with the management systems of the existing operators and open up better opportunities for new hosting operators and new vendors. However, adherence to the common TIP open architecture means that services can still be delivered across these neutral/private hosts.
If this is the case, the next few years - together with new 6G technologies - will lead to diversification across the industry, albeit not as originally envisaged.
AI
A more holistic view of AI
AI in the context of networks is generally split into two separate areas: ‘AI for networks’ and ‘networks for AI’. However, this year at FYUZ, much of the discussion was more holistic, treating the two as symbiotic. One presenter suggested that it is appropriate to take four views; in addition to the two already mentioned, they included ‘network of AI’ to capture the developing agentic AI architecture, and an even more holistic ‘AI native architecture’.
Some of this direction was nicely summarised in a keynote presentation by Dimitra Simeonidou of Bristol University, showing that when sensing/ISAC information is brought together with automated control, you create a truly ‘cognitive’ network.
Agentic
The need for agentic AI was a strong theme across the three days. At one point, someone did raise the question from the floor of ‘why not centralise all information and create one common LLM?’, but this received a swift and fairly universal rejection as both impractical and inefficient. Distributed agentic solutions were presented as the way forward.
As it happened, the meeting had been preceded by a hackathon (sponsored by AWS) specifically on developing agentic solutions.
Role of private/campus networks and neutral hosting in AI
A consequence of following an agentic approach, which is inherently distributed, is that it lends itself to edge compute solutions. Many previous discussions on this over the last decade – what I might call a ‘5G’ view of edge compute – were based around edge compute hosted by the major mobile operators. In contrast, most of this discussion – what I might call a ‘6G’ view of edge compute – centred on use cases where the edge compute was more often integrated as part of a private or campus network, or perhaps provided by neutral hosts. This blended with the openness theme noted above.
Shift to focus on data sources
If we think of an AI system (here mainly the inference engine, not the learning process) as a complex function that takes input data, often very complex, and selects specific actions – the configurations/controls applied to a complex system – then there was a discernible shift in the focus of the use cases. In the past – again, let’s say in the ‘5G world’ – the focus was generally on the efficiency of the system being controlled by the actions coming out of the AI system. As we enter a ‘6G world’, there seems to be a much greater focus on the input data coming into the inference engine. Moreover, this is also the data needed for learning/training the AI system.
Some of this is probably emerging from the growing realisation that many use cases start with sensors and devices gathering data from the real geographic world. The network itself is one quite helpful example. This shift in focus is in part what leads to a more distributed agentic architecture.
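To make the framing concrete, here is a minimal sketch of the inference engine as a function from observations to actions. All names, metrics and thresholds here are hypothetical illustration, not from any FYUZ presentation; the point is simply that the decision can only be as good as the observed input data.

```python
from typing import Dict

Observation = Dict[str, float]  # e.g. sensor readings, link metrics
Action = str                    # e.g. a configuration/control command

def inference_engine(obs: Observation) -> Action:
    """Stand-in for a learned model: it can only act on what it observes."""
    # Hypothetical rule: offload when the observed cell load is high.
    if obs.get("cell_load", 0.0) > 0.8:
        return "offload_traffic"
    return "noop"

# Richer, more representative input data improves both this decision
# and (per the text above) the data available for training.
print(inference_engine({"cell_load": 0.93}))  # -> offload_traffic
```

The ‘6G’ shift described above is a shift of effort to the observation side of this function.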
Problems with reliability/hallucinations
Another recurring theme was the limitations regarding the reliability of AI decision-making. Figures quoted were often only in the 60-70% range (which is itself a significant improvement), with many decisions being completely false ‘hallucinations’ from the AI system. At one level, it was surprising, possibly alarming, how unconcerned people seemed to be by this. The general feeling was very much that this will get better with more work.
The lack of good representative training data was a related recurring theme – in line with many similar events over the last few years. There was also a comment that not everyone is equal in this regard, noting that a major RAN vendor has access to all sorts of usage and configuration data direct from their own equipment which they do not necessarily share with anyone else.
On a different tack, AWS suggested that one way of mitigating hallucinations was to avoid the use of completely free natural language and constrain the language of the AI system to a domain specific “Network language model”. In essence, this would stop a lot of hallucinations simply because they cannot be expressed by the network language model.
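AWS did not present an implementation, but the idea can be sketched: approximate a domain-specific ‘network language’ by a closed vocabulary of actions and targets, and reject any model output that cannot be parsed within it. Everything below (the vocabulary, the command format, the names) is hypothetical illustration of the principle, not a real product.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical closed vocabulary: the only utterances the ‘network
# language’ can express. A free-text hallucination matches none of them.
ALLOWED_ACTIONS = {"set_tx_power", "handover", "scale_upf", "noop"}
ALLOWED_TARGETS = {"cell-17", "cell-42", "upf-1"}

@dataclass
class NetworkCommand:
    action: str
    target: str
    value: Optional[int] = None

def parse_constrained(raw: str) -> Optional[NetworkCommand]:
    """Accept only 'action target [value]' from the closed vocabulary;
    anything else is inexpressible in the network language and dropped."""
    parts = raw.strip().split()
    if len(parts) not in (2, 3):
        return None
    action, target = parts[0], parts[1]
    if action not in ALLOWED_ACTIONS or target not in ALLOWED_TARGETS:
        return None
    value = None
    if len(parts) == 3:
        if not parts[2].lstrip("-").isdigit():
            return None
        value = int(parts[2])
    return NetworkCommand(action, target, value)

# A well-formed command passes; a hallucinated free-text suggestion does not.
print(parse_constrained("set_tx_power cell-17 20"))
print(parse_constrained("reboot the entire core network immediately"))  # None
```

In a real system the constraint would sit in the model’s decoding step rather than in a post-hoc parser, but the effect is the same: hallucinations that cannot be expressed are never emitted.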
Other significant themes
While openness and AI, especially in the context of 6G, were the dominant themes, there were other topics of note.
Sensing and ISAC
A number of things seem to be coming together at this point in time:
- There is a growing number of sensors measuring all sorts of physical parameters important to a wide variety of industry verticals from factory automation to farming to traffic monitoring, etc;
- There are a growing number of connected cameras producing video streams from which features and other information can be readily extracted;
- Optical fibre systems can detect vibrations in the fibre and can detect anything from vehicle movements to oceanic earthquakes;
- The latest generation of mobile RAN signals can extract other environmental information from signal reflections and their changes.
These seem to be leading to a new focus on the data sources feeding AI systems, and on the local processing of vast amounts of data with agentic AI inference engines to extract the key features from these data sources. The widely noted implication – that there is a need for distributed AI compute together with extensive upstream capacity – follows directly.
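A hedged sketch of that local processing step (hypothetical names and numbers): an edge agent reduces a window of raw sensor samples to a few summary features before anything is sent upstream, which is why distributed compute and upstream capacity go together.

```python
import statistics

def extract_features(samples: list) -> dict:
    """Summarise a window of raw samples (e.g. fibre vibration readings)
    locally, so only features travel upstream, not the raw stream."""
    return {
        "mean": statistics.fmean(samples),
        "stdev": statistics.pstdev(samples),
        "peak": max(samples, key=abs),   # largest excursion in the window
    }

window = [0.1, 0.3, -0.2, 4.8, 0.0]  # five raw samples
print(extract_features(window))       # reduced to three features
```

At scale, the same reduction applied to video or RAN reflection data is far more drastic, but the architectural consequence is the same.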
Automation
Automation of network configuration and operational processes was one of the original use cases for ‘AI for the network’ and remains so. However, there seems to be a growing recognition of key aspects of the real world, which contextualises and constrains some of the early hype in this area:
- The actual current levels of efficiency are often already quite high, or have good operational/reliability reasons for being the way they are, and the practical opportunity for improving efficiency by rapid reconfiguration is often limited;
- When there is genuine massive overload on the network (for example major sports events, emergency scenarios, etc), more physical resource is often the only practical mitigation (reconfiguring existing resource is like rearranging the deckchairs on the Titanic);
- Many AI systems remain sufficiently unreliable that human oversight is still needed – one presenter cited automation ‘level 4’ as the target, with no mention of ‘level 5’;
- By and large, automation does not completely replace humans, it more often just changes the boundary between humans and the software systems.
Private/campus networks and neutral hosting
This is now a separate project group in TIP and was producing many of the more interesting use cases. The delays in deploying 5G standalone (SA) have often been cited as the main reason for the slow uptake of URLLC and mMTC by industry verticals, but it may also be that an implicit ‘one size fits all’ approach in these 5G services, offered through public networks using the major vendors, doesn’t meet the diverse needs of each industry vertical.
- Neutral hosting seems to give an opportunity to work more directly with industry verticals, allowing each industry to evolve from their current modes of working in ways that suit them, rather than having to suddenly become a 5G or 6G use case conforming to every aspect of mobile architecture;
- Coupled with this, it allows industries to develop solutions in a more technology-neutral way, making best use of different access technologies, be they 5G, WiFi, LoRaWAN, cabled, etc.;
- Private/campus networks can provide the level of isolation and security required by some industries and organisations, and/or require fewer changes to existing security arrangements to protect their sensor data and AI controls.
What does this all mean?
Thinking of the opportunities for innovation by agile SMEs, I found this meeting to be rather encouraging. While the message that the major operators are likely to still want to work with the major vendors was reinforced, so were the wide range of new opportunities around private/campus networking as well as neutral hosting.
The suggestion seems to be for applications to focus on the data sources, the AI processing, and how these work with current industry vertical systems and processes, and to be more neutral as to whether the access technology is 4G, 5G, 6G, LoRaWAN, WiFi, cabled, etc.
For neutral hosting, the suggestion seems to be to provide good coverage in otherwise uncovered places, offering transparent and secure tenancies to the applications and wider networks being hosted.