Frequently asked questions

What are the goals of Bioindustry 4.0?

The project is a cooperation of 6 major European research infrastructures. It develops and refines advanced technologies that can be offered as new services to the European biotech sector. The motivations behind the project are simple and two-fold:

1) To support the adoption of advanced digital tools by the European biotech sector, reaching out to industry professionals and academic stakeholders alike.

2) To make bioprocess design and control faster, cheaper, and more sustainable.

What new services will Bioindustry 4.0 develop?

Bioindustry 4.0 is developing several new services featuring both hardware and software solutions to support research and development in the industrial biotech sector. These services include:

  • Innovative measurement devices for online monitoring (also known as PAT devices)
  • Strain discovery and decision support systems for use across different collections
  • AI and machine learning tools to support both bioprocess design and control
  • A data fabric that can securely integrate data from various points and systems (both on-site and in cloud services)

You can also read more about these services here.

How can I access these services?

Once they are ready to be deployed, these new services will be offered through the research infrastructures collaborating on Bioindustry 4.0.

That said, to make sure that the services are actually of use and benefit to the community, the project is developing them in conversation with Europe’s biotech stakeholders: we are involving stakeholders from academia and biotech companies alike throughout the lifetime of the project, hosting focus groups, and holding co-design workshops.

If you’re working in the bioindustry (whether in academia or in a company), are part of the digital scientific community, or are involved in policy making or regulation, please get in touch with us! You can reach us on our LinkedIn page.

What are research infrastructures?

The term research infrastructure refers to an organisation that provides facilities, resources, and services to research communities to support scientific research and to foster innovation.

The services such research infrastructures provide are diverse and can include:

  • Specialised scientific equipment or instruments
  • Collections and archives (from biological materials to manuscripts)
  • IT or communication services
  • Data or data sets

A research infrastructure can be single-sited, distributed (e.g. across a network of hosting countries or institutions), or virtual (offering web-based or virtual research environments and services). A key value of research infrastructures is that they are open to external users, fostering collaboration between scientists from different countries, disciplines, and economic sectors.

One of the European Commission’s key strategies centres on the development and better use of pan-European, intergovernmental, and national research infrastructures. The advantages of this strategy include, amongst others, an organised and transparent system for sharing knowledge; the pooling of data, facilities, and equipment; and the avoidance of duplicated effort and resources. Bioindustry 4.0 represents the collaboration of 6 major European research infrastructures and their joint effort to support digitalisation across a diverse industrial sector.

What is a digital twin?

“Digital twin” is a term that is used frequently, broadly, and in varied situations. There are three related concepts: digital model, digital shadow, and digital twin. Their definitions can differ depending on context and source, especially since all three are so similar. For that reason, we want to review their definitions and how we use the terms in Bioindustry 4.0.

A digital model is a virtual representation of a physical counterpart, be it an object, a system, or a process. Digital models come in various forms, from 3D models to algorithms. Models are often used for simulation or forecasting purposes, and they are extremely useful in, e.g., data analysis.

A digital shadow is a digital representation of a physical asset. However, the digital shadow does not just imitate or mirror the given asset, system, or process: it automatically collects data from it. This means the digital shadow is synced, or at least up to date, with the physical entity it shadows. A biotech company might create a digital shadow of its production line to monitor and analyse its processes, for backup purposes, optimisation, and data-driven decision making.

A digital twin is not merely a virtual representation of its counterpart but involves a real-time (or close to real-time) connection between the two and a two-way information flow: data flows not only from the asset to its twin, but also from the twin back to the physical asset it represents.

In summary, a digital model has no automated data flows between the digital and physical worlds and, unless it is manually updated, remains static. A digital shadow strongly mirrors the physical world and exhibits automated data flow from the physical asset to its virtual shadow. A digital twin has the strongest connection between the physical asset and its virtual counterpart, and it exhibits data flow in both directions.
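The distinction above boils down to direction of data flow. As a loose illustration (the class names here are hypothetical and not part of any Bioindustry 4.0 software), the three concepts can be sketched like this:

```python
class PhysicalAsset:
    """Stand-in for a physical entity, e.g. a bioreactor with one measurement."""
    def __init__(self, temperature: float):
        self.temperature = temperature

    def apply_setpoint(self, temperature: float):
        self.temperature = temperature


class DigitalModel:
    """Digital model: no automated link; updated only by hand."""
    def __init__(self, temperature: float):
        self.temperature = temperature


class DigitalShadow:
    """Digital shadow: automated one-way flow, asset -> shadow."""
    def __init__(self, asset: PhysicalAsset):
        self.asset = asset
        self.temperature = asset.temperature

    def sync(self):
        # Read-only mirror of the physical asset.
        self.temperature = self.asset.temperature


class DigitalTwin(DigitalShadow):
    """Digital twin: two-way flow; the twin also acts back on the asset."""
    def control(self, target: float):
        self.asset.apply_setpoint(target)  # twin -> asset
        self.sync()                        # asset -> twin
```

Note that the twin differs from the shadow only by the `control` method, which is exactly the second direction of data flow described above.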

What is the role of AI in the project?

Artificial Intelligence (AI) plays a significant role in Bioindustry 4.0, particularly in the development of data-driven approaches and decision support systems. AI technologies are used to analyse and interpret the vast amounts of data generated in bioprocesses. This analysis can help optimise these processes, maximise production, and reduce manufacturing time and costs. AI is also crucial in the creation and operation of digital twins. A digital twin is a virtual replica of a physical counterpart that mirrors the behaviour and dynamics of that counterpart. In the context of Bioindustry 4.0, digital twins are used to better design bioprocesses and to enable their real-time online control. AI empowers these digital twins, allowing them to accurately predict and control bioprocesses in real time. AI is thus a key enabler in Bioindustry 4.0, driving the deep digitalisation of industrial biotechnology towards smart biomanufacturing.

What is a data fabric?

For Biologists: For Industrial Biotechnology (IB), a data fabric provides a seamless interface to data integrated from multiple, distributed (in different locations), and heterogeneous sources such as bioreactors, wet labs, or other experimental setups. Once connected, it links real-time data streams from sensors and other devices to metadata describing the processes that generate the data, and it provides visualisation and analytical tools. This enables both human operators and computational algorithms (ranging from process models to more advanced tools such as AI and machine learning) to control and optimise bioprocess parameters for improved yield, product purity, and other factors. The ultimate goal of a data fabric in the field of biotechnology is to accelerate the digital transformation, maximising the value of your data.

For Computer scientists or engineers: A data fabric is a unified framework that seamlessly integrates and manages diverse data sources, allowing for efficient access, analysis, and visualisation of information across and within organisations. For Industrial Biotechnology (IB), a data fabric enables seamless integration of federated data from heterogeneous processes, including live and legacy data from sources such as bioreactors. It links real-time data streams from sensors, samples, and other devices with metadata that describes the process, and it provides a modular framework that can offer appropriate visualisation and analytical tools. The data fabric unlocks siloed data and metadata that may be dispersed across various platforms and communities or obscured by differing terms and languages. This data and metadata may be associated with various data warehouses and data lakes via data connectors, but the interface to the data fabric is transparent.
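The core idea, a single interface that returns a live reading together with the metadata describing its process, can be sketched in a few lines of Python. This is a toy illustration, not the project's actual software; the `DataSource` and `DataFabric` names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataSource:
    """One heterogeneous source: a live data stream plus its process metadata."""
    name: str
    read: Callable[[], float]                      # real-time data stream
    metadata: dict = field(default_factory=dict)   # describes the process

class DataFabric:
    """Registers sources and answers queries through one uniform interface."""
    def __init__(self):
        self.sources: dict[str, DataSource] = {}

    def register(self, source: DataSource):
        self.sources[source.name] = source

    def query(self, name: str) -> dict:
        # Return the reading linked to its metadata, regardless of where
        # or how the underlying source runs.
        src = self.sources[name]
        return {"value": src.read(), **src.metadata}

fabric = DataFabric()
fabric.register(DataSource(
    name="bioreactor-1/pH",
    read=lambda: 7.1,  # stand-in for a real sensor call
    metadata={"unit": "pH", "process": "fed-batch fermentation"},
))
```

A consumer, whether a human dashboard or a machine-learning controller, would then call `fabric.query("bioreactor-1/pH")` without needing to know whether the source is an on-site bioreactor or a cloud service, which is the transparency described above.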