Making the most of connected machines
By Ian Raper, Regional Vice President, Riverbed Australia & New Zealand
Automation and the rise of smart, Internet-connected manufacturing equipment (known as M2M or the Internet of Things, depending on the device) stand to deliver previously unimaginable productivity gains and cost savings. But while these new technologies promise to simplify the production process, the influx of connected machines and devices also complicates the network that delivers those efficiency gains.
With each new machine connected to the network, finding faults and troubleshooting application performance issues can be like searching for a needle in an ever-growing haystack.
A major source of delay in the troubleshooting process lies in the way IT teams are structured: specific parts of the infrastructure (e.g. network operations, server operations, application operations) are run by different groups. Each of these teams has visibility into a different part of the infrastructure and, traditionally, has had no way to share information effectively. This siloed approach means the first step in the troubleshooting process is usually a counterproductive round of finger pointing between groups, as none has visibility of the whole.
But it doesn’t need to be like this.
The key to effective, real-time performance optimisation and troubleshooting is visibility. If each of these teams has a deep, end-to-end view within and across the complex networks and applications that make up the modern manufacturing process, any performance issue can be quickly resolved.
In fact, these issues can even be fixed before they have an effect on the production line by using the same automation that is becoming prevalent on factory floors.
At Riverbed, we’ve developed solutions to minimise downtime in the production environment and avoid potentially devastating disruptions that can reverberate through the supply chain.
Clearing the fog of war
The first strategy is to have the right data, as a single source of truth, available to the right team(s) at the right time. This means issues can be addressed with every team able to see the impact, rather than blaming each other for what's going wrong. With this data available to all IT teams, each can follow its own workflow to address performance issues, whether the problem is end-user experience, application performance, network performance, or infrastructure stress and failure.
This kind of data can be shared quickly with the right teams before, or instead of, getting into the war room.
Rapid response
A second strategy mirrors a trend currently making its mark on manufacturing floors across the globe: automation. It is not only possible but best practice to know when issues occur, or are likely to occur, by automatically baselining key measurements that trigger a rapid, proactive "detect and fix" response before end users even notice. Consider a case where throughput climbs well above what is considered normal: chances are that some applications and users are being crowded out during that time frame.
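The baselining idea above can be sketched in a few lines. This is a minimal illustration, not Riverbed's implementation: it learns a baseline from recent throughput samples and flags any new reading that exceeds the mean by a chosen number of standard deviations (the window size and threshold multiplier here are assumptions for the example).

```python
from statistics import mean, stdev

def baseline(samples, window=20, k=3.0):
    """Compute (mean, alert threshold) from the most recent samples.

    A reading above mean + k * stdev is treated as a deviation from
    normal operations -- a simple stand-in for automated baselining.
    """
    recent = samples[-window:]
    mu = mean(recent)
    sigma = stdev(recent) if len(recent) > 1 else 0.0
    return mu, mu + k * sigma

def is_anomalous(samples, reading, window=20, k=3.0):
    """Return True when a new reading breaches the learned baseline."""
    _, threshold = baseline(samples, window, k)
    return reading > threshold

# Throughput (Mbps) normally hovers around 100; a 400 Mbps spike is flagged.
history = [100, 98, 103, 99, 101, 102, 97, 100, 104, 99]
print(is_anomalous(history, 400))  # True
print(is_anomalous(history, 103))  # False
```

In practice the trigger would feed an automated remediation workflow rather than a print statement, but the principle is the same: the baseline, not a human, decides when something is abnormal.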
To ensure mission-critical applications keep running smoothly during such bandwidth spikes, you can set Quality of Service (QoS) policies that give these apps higher priority so that productivity is not impacted. The only way for this approach to work successfully and efficiently is to have complete visibility of which apps are running and how much bandwidth they normally consume. Once you have this information, you can set your traffic policies at the branch with QoS.
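The step from visibility to policy can be illustrated with a small sketch. The class names and the 50 Mbps cutoff below are hypothetical placeholders, not a vendor policy language: given observed per-app bandwidth, critical apps get priority and heavy non-critical traffic is demoted.

```python
def assign_qos(app_bandwidth, critical_apps):
    """Map each application to an illustrative QoS class.

    app_bandwidth: dict of app name -> average Mbps observed
    critical_apps: apps that must stay responsive during spikes
    """
    policy = {}
    for app, mbps in app_bandwidth.items():
        if app in critical_apps:
            policy[app] = "high"          # protected during congestion
        elif mbps > 50:
            policy[app] = "best-effort"   # heavy, non-critical traffic
        else:
            policy[app] = "normal"
    return policy

# Observed usage (Mbps) for four hypothetical apps on the branch network.
usage = {"mes": 20, "erp": 15, "video-streaming": 120, "email": 10}
print(assign_qos(usage, {"mes", "erp"}))
# {'mes': 'high', 'erp': 'high', 'video-streaming': 'best-effort', 'email': 'normal'}
```

The point of the sketch is the dependency it makes explicit: the policy is only as good as the measured bandwidth data that feeds it, which is why visibility has to come first.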
In an industry where more devices are connected to the network each day and interact with each other across shared networks, complete visibility across these complex architectures, applications, and end-user domains is the only way to quickly identify and fix performance issues.
Not only can this single source of information ensure that the right IT team has the right information at the right time to troubleshoot issues, but performance benchmarks can be set so that any deviation from standard operations triggers a proactive fix, and the production line never even notices a problem.