Microcomputers
The director knows the machine is running. How long, under what load, why one shift produces 30% fewer parts — that stays unknown.
An IT approach to manufacturing
Industrial automation has two poles. On one side are expensive solutions from major vendors: production management systems, digital twins, integrations worth millions of dollars. On the other is manual data collection, where information is recorded sporadically and lost.

As IT specialists with observability experience, we decided to apply the same principles used for monitoring servers and applications to production equipment. Metrics are collected from sensors, transmitted to the cloud, visualised on dashboards. Alerts arrive when something goes out of bounds. Logs are saved for analysis. Manufacturing is the same source of telemetry as IT infrastructure. The question is how to collect that data and what to do with it.
The blind zone
Imagine a workshop. Fifty machines, three shifts, two hundred people. Some of the equipment is modern, with controllers and displays. Some is mechanical, without electronics, models 30-50 years old. Operational data is gathered in fragments: someone wrote something down, someone remembered, someone passed it on verbally. At the end of the month a report appears. The numbers are there, and they explain nothing.

Why does one area consistently fall behind? Why do parts break more often on this machine? Why is electricity consumption rising while output volume stays the same?

Above all, there is no way to compare. How did this area work a year ago? How did we handle a similar order last quarter? What readings came up before the last breakdown? Without accumulated data, these questions remain unanswered.

How we solve it
We apply to manufacturing the same principles that work in IT: observation, logging, metrics, alerting. The data source changes: machines, sensors and controllers in place of servers.

Network. All equipment needs to be connected. Where possible, low-voltage cabling is installed. Where cable runs are impractical, nodes with 4G modems are placed. Each node works autonomously: if connectivity drops, data accumulates locally and is sent once connectivity returns.

Hardware. Microcomputers with attached sensors are distributed across workshops. The configuration depends on the task: one microcomputer can serve several machines, or one machine can have several data collection points.

Cloud. All data flows into a single system where monitoring, alerts and analytics run. Computation happens in the cloud, and resources are rented as needed.
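As an illustration of how a node can survive connectivity drops, here is a minimal store-and-forward sketch in Python. The ingest URL, the SQLite buffer layout and the reading format are assumptions for the example, not a description of a specific deployment.

```python
# Minimal store-and-forward sketch for an autonomous collection node.
# INGEST_URL, the buffer schema and the reading format are illustrative.
import json
import sqlite3
import time

import requests

INGEST_URL = "https://cloud.example.com/ingest"  # hypothetical endpoint

db = sqlite3.connect("buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS buffer (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue(reading: dict) -> None:
    """Persist a reading locally so nothing is lost while the uplink is down."""
    db.execute("INSERT INTO buffer (payload) VALUES (?)", (json.dumps(reading),))
    db.commit()

def flush() -> None:
    """Send buffered readings oldest-first; stop at the first failure."""
    rows = db.execute("SELECT id, payload FROM buffer ORDER BY id").fetchall()
    for row_id, payload in rows:
        try:
            requests.post(INGEST_URL, data=payload, timeout=5).raise_for_status()
        except requests.RequestException:
            return  # uplink still down; keep the rest buffered
        db.execute("DELETE FROM buffer WHERE id = ?", (row_id,))
        db.commit()

while True:
    enqueue({"ts": time.time(), "machine": "lathe-07", "rpm": 1420})  # example reading
    flush()
    time.sleep(10)
```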


Equipment in workshops
For modern machines with controllers, integration happens via Modbus or Ethernet. The data is already inside the machine; it just needs to be pulled and gathered in one place. For old machines without electronics, external sensors are installed, leaving the mechanics untouched.

Modbus energy meters are mounted on a DIN rail in the electrical cabinet and show electricity consumption in real time: operating mode, load, anomalies. Three-phase models reveal the full picture for each phase. MEMS accelerometers on the housing track vibration, which reveals changes in the mechanics as the machine runs. Temperature sensors on critical units record deviations from the norm. On lathes and milling machines, spindle RPM sensors show operating modes. On machines with hydraulics or pneumatics, pressure sensors are installed. Oil and coolant level sensors signal the need for maintenance. Cycle counters log the number of operations.
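To make the Modbus side concrete, here is a sketch of polling a three-phase energy meter over Modbus RTU with the pymodbus library. The register addresses and scaling are illustrative; every meter documents its own register map, so the datasheet is the source of truth, and the exact client API differs slightly between pymodbus versions.

```python
# Sketch: polling a three-phase energy meter over Modbus RTU from a
# microcomputer. Register addresses and scaling are illustrative.
from pymodbus.client import ModbusSerialClient  # pymodbus 3.x API

client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=9600, timeout=1)
client.connect()

# Hypothetical layout: three consecutive holding registers with per-phase
# current in hundredths of an ampere.
result = client.read_holding_registers(address=0x0006, count=3, slave=1)
if not result.isError():
    l1, l2, l3 = (value / 100 for value in result.registers)
    print(f"Current, A: L1={l1} L2={l2} L3={l3}")

client.close()
```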



Legacy equipment
A separate story is machines without electronics. It could be a lathe 30-50 years old, running since the factory was built, or a modern model assembled to a classical design without digital controllers. Such machines are reliable and replacing them makes no sense, but unlike machines with controllers they offer no easy way to observe them daily and log readings. On such equipment you can install, for example, a vibration sensor, an oil level sensor or a power meter; the list goes on. After installation you see whether the machine is running or idle, under load or at idle speed, whether vibrations are within norm or deviations have appeared, and how much oil remains. It becomes possible to track equipment condition between scheduled inspections.

The question is how long this machine will keep running without service. With sensors, that becomes predictable.
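As a sketch of what "predictable" can mean in practice, here is a simple classifier that turns windows of accelerometer readings into a coarse machine state. The thresholds are placeholders; in a real deployment they are calibrated per machine from a few days of baseline data.

```python
# Sketch: deriving a coarse machine state from an accelerometer on a
# legacy machine. All thresholds are illustrative placeholders.
import math

IDLE_RMS = 0.05      # g; below this the machine is considered stopped
ALERT_FACTOR = 1.5   # flag when RMS exceeds the learned baseline by 50%

def rms(samples: list[float]) -> float:
    """Root-mean-square of raw acceleration samples, in g."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify(samples: list[float], baseline: float) -> str:
    """Coarse machine state from one window of accelerometer data."""
    level = rms(samples)
    if level < IDLE_RMS:
        return "stopped"
    if level > baseline * ALERT_FACTOR:
        return "vibration above norm"  # worth scheduling an inspection
    return "running"

# Example: one window of readings against a baseline learned during
# normal operation.
print(classify([0.18, 0.22, 0.19, 0.21], baseline=0.12))  # -> "vibration above norm"
```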
Air quality and ventilation
For some production sites, air control is a safety matter. In paint shops, woodworking plants and repair shops, suspended dust or vapour can be critical. Air quality sensors with Modbus or RS485 interfaces measure PM2.5 and PM10 particle concentrations, CO2 levels and volatile organic compounds (VOC). This data feeds into the same monitoring system. Alerts can be configured: if dust concentration exceeds the norm, the system notifies those responsible. It can also be linked to ventilation: when CO2 rises, supply fan capacity increases automatically. Similar sensors are installed inside ventilation systems to show whether extraction is running, whether airflow exists, and what the throughput is at each stage. This shows the state of engineering systems in real time and surfaces problems long before the workshop becomes hard to breathe in.
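A sketch of the ventilation link mentioned above: mapping CO2 concentration to supply fan capacity. The setpoints are assumptions, and fan_speed_percent only computes the target; on a real site the fan would be driven via Modbus, a 0-10 V output or the building management system.

```python
# Sketch: deriving supply-fan capacity from CO2 readings.
# The setpoints below are illustrative, not recommendations.
CO2_LOW_PPM = 800    # below this, minimum ventilation is enough
CO2_HIGH_PPM = 1400  # above this, run the fan at full capacity

def fan_speed_percent(co2_ppm: float) -> float:
    """Map CO2 concentration to fan capacity, linear between two setpoints."""
    if co2_ppm <= CO2_LOW_PPM:
        return 30.0   # minimum airflow
    if co2_ppm >= CO2_HIGH_PPM:
        return 100.0
    span = (co2_ppm - CO2_LOW_PPM) / (CO2_HIGH_PPM - CO2_LOW_PPM)
    return 30.0 + span * 70.0

print(fan_speed_percent(1100))  # -> 65.0
```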
Safety and access control
The same infrastructure that collects equipment data serves safety tasks. The useful way to think about it is as an event model: every event is logged, whether a hatch opened, a door closed or a presence sensor triggered. Even if the information is of no immediate use, it may be needed to investigate an incident or for later analytics, so it is worth holding on to data that seems meaningless.
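One way to implement such an event model is a uniform record that every trigger, from a hatch sensor to a gate camera, is serialised into. The field names here are an assumption for illustration, not a fixed schema.

```python
# Sketch of a uniform event record, so equipment and safety events can
# live in one stream. Field names are illustrative, not a fixed schema.
import json
import time

def make_event(source: str, kind: str, detail: dict) -> str:
    """Serialise one event; every trigger becomes a line in the log."""
    return json.dumps({
        "ts": time.time(),
        "source": source,   # e.g. "hatch-12", "gate-north"
        "kind": kind,       # e.g. "opened", "presence", "overload"
        "detail": detail,
    })

print(make_event("hatch-12", "opened", {"zone": "electrical-room"}))
```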
Lifting equipment
Presence sensors on cranes and forklifts. Load cells for weight control. Position sensors for tracking movements. Automatic stop when a person is detected in a hazardous zone.
Access control
Barriers with licence plate recognition. Zoning of the territory by clearance levels. Logging of all vehicle and pedestrian passes.
Perimeter monitoring
Door and hatch opening sensors. Access control to technical rooms. Notifications on unauthorised access.
Engineering systems
Leak sensors in critical zones. Fire detectors within the unified monitoring system. Pressure and temperature control in pipelines.
All events go into a single system. One dashboard shows equipment status alongside safety events.
Cloud and analytics
The whole system of many microcomputers scattered across the site is managed from a single centre. That can be the cloud or a local server on the premises. The cloud is simpler for starting out: there is no need to buy and maintain server hardware. Data is stored in several copies on different servers; if something happens to one, the information stays intact. For most monitoring tasks this suffices. A local server makes sense when the data is sensitive and must stay within the site perimeter, or when real-time response is needed with no dependence on the internet.

For data collection and visualisation we use open-source tools: Prometheus for metrics, Grafana for dashboards, Loki for logs. Alerts arrive in messengers or by email.
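A minimal sketch of how a collection node can expose readings to Prometheus using the prometheus_client library. The metric names are illustrative, and the random values stand in for the real sensor reads described above; Prometheus then scrapes each node on the port it opens.

```python
# Sketch: exposing machine metrics to Prometheus from a collection node.
# Metric names are illustrative; random values stand in for sensor reads.
import random
import time

from prometheus_client import Gauge, start_http_server

spindle_rpm = Gauge("machine_spindle_rpm", "Spindle speed, RPM", ["machine"])
power_watts = Gauge("machine_power_watts", "Active power draw, W", ["machine"])

start_http_server(8000)  # Prometheus scrapes this node on :8000

while True:
    spindle_rpm.labels(machine="lathe-07").set(1400 + random.uniform(-20, 20))
    power_watts.labels(machine="lathe-07").set(2200 + random.uniform(-100, 100))
    time.sleep(5)
```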

The data shows the cause. The next step is a plan of action.
The system reveals the real running time of each machine, downtime and its duration, anomalies in energy consumption, the effectiveness of shifts and areas, and trends in how readings change. This makes it possible to plan maintenance before breakdown: if vibration starts rising, you can estimate how many operating hours remain before intervention is needed.
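For example, real running time can be derived from the state log with a few lines of code. The log format, a list of (timestamp, machine, state) events, is an assumption for the sketch.

```python
# Sketch: share of logged time each machine spent running, computed from
# a state log. The (timestamp, machine, state) format is an assumption.
from collections import defaultdict

def utilisation(events: list[tuple[float, str, str]]) -> dict[str, float]:
    """Fraction of logged time each machine was in the 'running' state."""
    running = defaultdict(float)
    total = defaultdict(float)
    last = {}  # machine -> (timestamp, state)
    for ts, machine, state in sorted(events):
        if machine in last:
            prev_ts, prev_state = last[machine]
            total[machine] += ts - prev_ts
            if prev_state == "running":
                running[machine] += ts - prev_ts
        last[machine] = (ts, state)
    # The open-ended interval after the last event is ignored.
    return {m: running[m] / total[m] for m in total if total[m] > 0}

events = [(0, "lathe-07", "running"), (3600, "lathe-07", "stopped"),
          (5400, "lathe-07", "running"), (9000, "lathe-07", "stopped")]
print(utilisation(events))  # -> {'lathe-07': 0.8}
```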
Data and people
A separate layer of analytics emerges when technical metrics are matched with data on shifts and specific operators. You can see which machine ran during which shift, who was assigned to it, and what the readings were. This helps to surface systemic issues: perhaps the problem lies with the logistics of material supply on the night shift. Or one machine takes more time to retool. Or a certain type of part is consistently produced more slowly.
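Matching metrics with people can be as simple as joining each sample against the shift roster by time. The roster structure here is an assumption; in practice it would be exported from whatever planning system the site uses.

```python
# Sketch: attaching an operator to each metric sample via the shift
# roster. The roster structure is an assumption for illustration.
shifts = [
    # (machine, operator, shift start ts, shift end ts)
    ("lathe-07", "operator A", 0, 28800),
    ("lathe-07", "operator B", 28800, 57600),
]

def operator_at(machine: str, ts: float) -> str | None:
    """Who was assigned to the machine at a given moment, if anyone."""
    for m, operator, start, end in shifts:
        if m == machine and start <= ts < end:
            return operator
    return None

# Attach an operator to a sample before aggregating metrics by shift.
sample = {"ts": 30000, "machine": "lathe-07", "parts": 12}
sample["operator"] = operator_at(sample["machine"], sample["ts"])
print(sample)  # -> {..., 'operator': 'operator B'}
```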
How we work
We start with understanding the task: what observability is needed, what already exists on the site, which data would be valuable to collect. If low-voltage infrastructure exists — good, it speeds up deployment. If it does not — we work with 4G modems or bring in contractors to install the network. Next — design: which sensors, where to place them, how to link them into a single system. Then deployment: equipment mounting, software configuration, integration with existing processes. After launch — training: we show how to work with dashboards and alerts. And support: we monitor the state, update firmware, expand the system as needed. You can start with one area or even a few machines, see how it works, and scale further.
Who this is for
Such monitoring is a tool for those ready to work with data. First you need to accumulate information. Then try to cross-reference it by different criteria: by time, by shifts, by product types, by areas. Find trends. Then look for explanations and solutions, through both log review and conversations with people on shift.

This is useful for those improving processes at a plant. For safety teams who need the full picture of events. For management who need aggregated analytics across the whole production. For process engineers who want to understand how the equipment actually runs.

Data on its own decides nothing. When a large amount of scattered information becomes unified by attributes and available for analysis, it becomes possible to make decisions grounded in facts, with guesswork set aside.
Discuss a task
Tell us about your production — we will see what can be measured and what benefit it will bring.