Five Trends That Will Impact The Energy Industry

These top five trends in 2018 will positively impact the oil and gas industry. (Source: Flutura)

As energy processes and industrial assets become digitized, they climb on an exponential growth curve instead of a linear growth trajectory. This digital transition is ripe with many possibilities, whether it is in artificial intelligence (AI), remote diagnostics using digital twins or next-generation usage-based operating models powered by sensor data. Oil and gas companies need to prepare for five trends.

Trend 1: Reimagine industrial AI-powered operating models

Most industrial AI applications are geared toward providing operational efficiency impacting the cost side of the balance sheet such as increased uptime and well yields and reduced HSE risks. For example, Flutura is powering a “digital prognostics as a service” model for a major upstream company where instead of reacting to asset downtimes, the company can proactively complete remote diagnostics and in-person interventions based on fault mode predictions from an AI model that is watching real-time equipment sensor streams.

Innovative business models will transform the market landscape for drilling service providers, equipment manufacturers and owner operators. Winners and losers will be decided by the ability of these traditional industrial sectors to deeply embed AI into core equipment and processes. This requires that many entrenched players reimagine their business operating models.

Trend 2: Upstream AI impacting well and equipment outcomes

AI platforms in 2017 were generic and untuned to the nuances of oil and gas. There has been a great deal of momentum in upstream areas. For example, Flutura’s Cerebra industrial AI application center has preconfigured solvers for ultraspecific upstream problems such as deepwater asset diagnostics, hydraulic fracturing, LNG and more. Expect to see more AI apps this year that will impact measurable outcomes using algorithms highly specialized to solve high-impact problems.

“Vanilla” data science will not suffice to solve mission critical problems in the oil and gas industry. As deep-learning algorithms become democratized, the importance of novel AI applications that solve a specific and complicated problem will increase. These applications will become more important than a horizontal AI platform, which requires immense tuning for the industry context.

Trend 3: Innovations in industrial sensors to see blind spots

A primary challenge in the practical execution of AI projects is blind spots in vital signals. For example, an upstream company realized through its work with Flutura that while its rotary assets had sufficient instrumentation (e.g., lube oil pressure and temperature, rpm, torque, etc.), there were critical blind spots when it came to the vibration and shock sensors that provide a crucial signal for the deep-learning algorithm to spot anomalies leading to failure. Some specific blind spots where significant sensor innovation will be seen this year include the detection of fluid and gas quality using optics based on differential interferometry, tampering of oil containers, and emissions and noise anomalies in close proximity to rotating assets.

Making assets and process context aware requires increasing the asset sensitivity to events both within and around them. Model quality is directly correlated to the quality of sensor streams. The better the sensors get, the better the AI models become.

Trend 4: Edge intelligence

There are two types of intelligence: informational and actionable. For example, if a leased asset in an asset-as-a-service offering is repeatedly being misused by a worker, edge intelligence will notify the supervisor to intervene. This decision-making loop cannot afford the time needed to ship massive sensor event data over the network and then wait for the AI layer at the center to respond. Localized sense-and-respond layers are needed to be operationally effective.

Edge intelligence is ideal for “fail operational” behaviors where an asset or process can complete its core operation even when a part of it fails. Edge intelligence also is ideal when reliability and latency are important. Large oil and gas projects have thousands of sensor events streaming across myriad wells with some decisions needing to be reliably made within milliseconds.
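To make the latency argument concrete, here is a minimal sketch of a local sense-and-respond loop running on the asset itself. The torque signal, the misuse threshold and the alert hook are illustrative assumptions, not any particular vendor's edge stack:

```python
# Minimal edge sense-and-respond sketch: decide locally, on the asset,
# whether to escalate, instead of round-tripping to a central AI layer.
# WINDOW, TORQUE_LIMIT and the torque signal are hypothetical.
from collections import deque
from statistics import mean

WINDOW = 60              # last 60 samples (~1 per second)
TORQUE_LIMIT = 450.0     # hypothetical misuse threshold (N*m)

recent = deque(maxlen=WINDOW)

def notify_supervisor(msg: str) -> None:
    # A real deployment would hit a local alerting channel, keeping
    # the loop in milliseconds rather than a data-center round trip.
    print(f"ALERT: {msg}")

def on_sample(torque: float) -> None:
    recent.append(torque)
    if len(recent) == WINDOW and mean(recent) > TORQUE_LIMIT:
        notify_supervisor(f"sustained torque {mean(recent):.0f} exceeds {TORQUE_LIMIT}")
        recent.clear()   # reset so the alert fires once per episode

for t in [430.0] * 30 + [480.0] * 60:   # simulated misuse episode
    on_sample(t)
```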

Trend 5: Sensor data highways

Today’s data networks are insufficient to keep up with the high rates of data transmission required by rising sensor density on upstream processes and assets combined with increased frequency of transmission. Companies like Sigfox and Ingenu are focused on building dedicated next-generation transmission infrastructures for moving sensor data. It will be like getting a dedicated lane on a national highway, where sensor data streams supporting mission-critical upstream processes and equipment can move.


Artificial intelligence, real quality control

What do process chemical manufacturing and cooking have in common?


Both have recipes: cookbooks for the kitchen, standard operating procedures for process-chemical manufacturing. Both need quality inputs. Both need dynamic control as the process unfolds, such as adding the right amount of pepper or calibrating temperature. Both need feedback signals, whether a chef sampling a dish midway or quality signals in process chemicals.

The problem facing the chemical-manufacturing industry is that, while there are standard operating procedures, they do not take into account the dynamic conditions in which actual manufacturing processes happen. For example, the mixer’s vessels may have been used before, leaving residuals, and the ambient air may carry moisture or dust that influences product quality.

As a manufacturer there are specific blind spots:

  •  What influence does each of these factors have on product quality outcomes? (Which factors are noise and which are signals?)
  •  What is the rank of each influencer variable? (Some variables may have disproportionately more influence on quality outcomes than others.)
  •  What is the expected quality outcome based on current conditions, and what would be the next best frontline action to take in order to reduce wasteful production?
I can illustrate this with a real-world story. We recently executed a project for an industrial-glue manufacturer and scaled it across multiple production lines in several countries. The problem: wasted production was costing the customer hundreds of millions of dollars because of the industry’s stringent quality controls, and it did not have the tools to pinpoint what influenced the quality outcome.

The solution: we built surgical AI apps that process multiple input signals (lab quality signals, sensor anomalies, process signals and ambient condition data) to predict the quality of the current production run and surface correlations between various parameters and quality outcomes.
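As a rough illustration of how the three blind-spot questions above map to a model, here is a minimal sketch under assumed data: the file name, column names and quality score are all hypothetical, and a tree ensemble's feature importances give only a first-pass ranking of influencer variables, not our production apps:

```python
# Minimal sketch: rank candidate quality influencers and predict the
# quality outcome of a batch. All column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("batch_history.csv")   # assumed historical batch data
features = ["vessel_residual_ppm", "ambient_humidity", "mix_temp_c",
            "agitator_rpm", "lab_viscosity"]
X, y = df[features], df["quality_score"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Which factors are signal vs. noise, and how do they rank?
ranking = sorted(zip(features, model.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for name, weight in ranking:
    print(f"{name:22s} {weight:.3f}")
```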

The best aspect of the process was that we closed the decision loop with the frontline folks by translating complex statistical signals into a simple quality “smiley” that indicates whether all is going well. When the smiley changed, production was shut down and forensics were initiated to nail the specific parameter that caused the quality deviation.

The learnings? If you are in process-chemical manufacturing and want to stay competitive, consider embedding AI into your frontline-manufacturing actions to boost quality outcomes. And before getting started, ask yourself:

  •  Which product lines experience the highest quality rejection rates? Can we isolate the top three?
  •  What is the economic impact of wasted quality? A best-case estimate? A realistic estimate?
  •  If product quality is enhanced by 3-5 percent, how much economic value would it unlock in the first, second and third years?
  •  What data pools exist? What about sensor data, lab data, SCADA/PLC data, maintenance ticket data, operator data?
  •  Which OT/IT systems hold this event data?
  •  Who can be the executive champion to shepherd the project?
  •  What if initial results from the AI processes could be consumed in 90 days?

I believe the process-manufacturing industry has to view industrial AI as a massive shift, not a temporary phenomenon. Rather than be paralyzed by the threats, companies that embrace industrial AI will boost their efficiency.

The risk of digital inaction is greater than the risk of no returns.

Derick Jose is co-founder and chief data scientist at Flutura Decision Sciences and Analytics.


Winning the industrial AI game: Why labeled failure data, not algorithms, is key

Artificial intelligence is slowly but steadily embedding itself into the core processes of multiple industries and changing the industrial landscape in so many ways — be it deep learning-powered autonomous cars or bot-powered medical diagnostic processes. The industrial and energy sectors are not immune to the disruption that comes with embracing AI. As upstream and downstream companies gear up for AI, there is one important lesson I want to share that might seem counterintuitive. For the successful execution of an AI project, the data matters more than the algorithm. Seems odd, right?

Let me start by sharing a recent experience. Flutura was working with a leading heavy equipment manufacturer based in Houston that has numerous industrial assets deployed on rigs globally. These rotary assets were quite densely instrumented; they had a great digital fabric consisting of pressure sensors, flow meters, temperature sensors and rpm sensors, all continuously streaming data to a centralized data lake. The problem the manufacturer was trying to solve was how to “see” typically unseen early warning signals of failure modes in order to reduce multimillion-dollar downtimes.

In order to do this, every time a piece of upstream equipment went down, we needed to label the reason why it went down. It might have been motor overheating, bearing failures or low lube oil pressure, but until we know the specific reason why equipment goes down, it’s difficult to extract the sequence of anomalies leading to the failure modes. While this company had a massive sensor data lake, running into terabytes, the information was useless until the failure labels were embedded within the assets’ timeline. In order to tag all “failure mode” label blind spots, we configured an app that helped institutionalize this process. Every time a maintenance ticket was generated for unplanned equipment downtime, the app would step through a workflow at the end of which the failure mode for the asset was tagged onto the timeline.
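A minimal sketch of that labeling step, assuming hypothetical file and column names: maintenance tickets carrying failure modes are joined onto the sensor timeline so the window of anomalies preceding each failure can be sliced out.

```python
# Embed failure-mode labels into the asset timeline so the sequence of
# anomalies before each failure can be extracted. Schema is assumed.
import pandas as pd

sensors = pd.read_parquet("sensor_stream.parquet")      # ts, asset_id, readings...
tickets = pd.read_csv("maintenance_tickets.csv",
                      parse_dates=["downtime_start"])   # asset_id, downtime_start, failure_mode

sensors = sensors.sort_values("ts")
tickets = tickets.sort_values("downtime_start")

# Tag each reading with the *next* downtime event for its asset, so the
# 24 hours leading up to, say, "bearing_failure" can be sliced out.
labeled = pd.merge_asof(
    sensors, tickets,
    left_on="ts", right_on="downtime_start",
    by="asset_id", direction="forward",
    tolerance=pd.Timedelta("24h"),
)
pre_failure = labeled.dropna(subset=["failure_mode"])   # training windows
```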

So, here are three questions to ask your team before you embark on an AI project:

  1. Top three failures: Which are the top three high-value failure modes that are most economically significant?
    Rationale: All failure modes are not the same. Isolating and prioritizing the vital few failure modes from the trivial many saves money.
  2. Tagging process: When equipment goes down, is the failure mode automatically generated by the asset or does it need a “human in the loop” to tag failures?
    Rationale: Some machines are programmed to record the failure mode event as a historian tag, others need an external process.
  3. Breadth and depth: What is the breadth and depth of equipment data available in the data lake?
    Rationale: In order to model the entire set of data, one needs to have maintenance tickets, sensor streams and ambient context. In order to “see” sufficient instances of a failure, the sensor data lake needs to have at least one to two years of operational data.

To conclude, it’s easy to get carried away by the hype surrounding AI and algorithms. But the key to winning the game is finding the answer to the above three data-tagging questions. Good luck as you introduce AI to unlock gold in your data.



Practical AI Lessons Learned

An operator digitized sensor data and integrated physics-based models with statistical data-driven models to predict risk of failure. (Source: Flutura Decision Sciences and Analytics)

A confluence of groundbreaking technologies bundled with next-generation business models is poised to transform the oil and gas industry. It’s history in the making. This convergence of digital technologies (the Industrial Internet of Things, artificial intelligence [AI], autonomous self-healing assets, drones, etc.) is creating entirely new ways of operating a producing well and massively transforming outcomes like increasing production and decreasing nonproductive time (NPT). The real-world examples below show these transformations solving real problems, and the takeaways are five lessons learned in the execution process.

Predicting fracture pump failures

Flutura worked with one of the world’s largest original equipment manufacturers (OEMs) of fracture pumps. Fracture pumps are used in harsh conditions, and drilling service providers and owner/operators expect the OEM to have an intimate understanding of the current health of a fracture pump and the potential ways it could succumb to a fault mode. To make this transition from the electromechanical world to the digital world, the customer created a digital twin of the fracture pump on Cerebra, including its various subsystems (pumps, engine, transmission, etc.), sensor signals (engine rpm, transmission oil pressure), trips, alarms and fault modes. Once the digital twin was created, a “digital umbilical cord” was established using Cerebra’s algorithmic state assessment module, providing remote digital diagnostics for the pump and predicting potential failure modes with associated confidence so that tickets could be created automatically for the field force. This, in addition to reducing downtime of nodal assets in the field, created a new predictable, recurrent revenue pool for the customer through its “digital health monitoring as a service” offering.

A digital twin reduced downtime of nodal assets in the field and created a predictable recurrent revenue pool for the customer. (Source: Flutura Decision Sciences and Analytics)
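The shape of that setup can be sketched as a toy; this is not Cerebra's actual API, and the class, the model stub and the 0.8 confidence threshold are all assumptions:

```python
# Illustrative toy of a digital-twin record and the "predict, then
# auto-ticket" loop described above. Names and threshold are assumed.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    asset_id: str
    subsystems: list                                # e.g. pump end, engine, transmission
    signals: dict = field(default_factory=dict)     # signal name -> latest reading

def predict_fault(twin: DigitalTwin) -> tuple:
    """Stand-in for the remote-diagnostics model: (fault_mode, confidence)."""
    return "transmission_oil_pressure_low", 0.87

def maybe_create_ticket(twin: DigitalTwin, threshold: float = 0.8) -> None:
    fault, confidence = predict_fault(twin)
    if confidence >= threshold:
        # In production this would call the field-force ticketing system.
        print(f"ticket: {twin.asset_id} -> {fault} ({confidence:.0%})")

twin = DigitalTwin("frac_pump_017",
                   ["pump_end", "engine", "transmission"],
                   {"engine_rpm": 1890, "transmission_oil_psi": 41.5})
maybe_create_ticket(twin)
```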

AI in FLNG carriers

A major global LNG carrier approached Flutura with an operational problem to solve. Floating LNG carriers are used to ship LNG from point to point. This is a complex and delicate process since the gas is stored at -162 C (-260 F) for ease of transport, at which point it takes up about 1/600th of its gaseous volume. There is a great deal of cryogenic and leakage risk associated with this process. The carrier wanted an “edge solution,” completely self-contained on the ship, to diagnose and predict risky outcomes. It created a digital twin of the LNG carrier using Cerebra modules, and the solution’s advanced deep-learning neural networks detected, in an unsupervised fashion, temperature and leakage anomalies that human eyes could not.

An LNG carrier created a digital twin to detect temperature and leakage anomalies. (Source: Flutura Decision Sciences and Analytics)
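One common unsupervised approach to this kind of problem is an isolation forest trained only on normal operation. The sketch below uses simulated tank readings and is not the carrier's actual solution; the feature names and values are invented:

```python
# Unsupervised anomaly flagging in the spirit of the shipboard solution:
# no labeled failures, just a model of "normal". Data is simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: tank temperature (C), vapor concentration (ppm); hypothetical
normal = rng.normal([-162.0, 1.0], [0.5, 0.05], size=(5000, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

live = np.array([[-161.8, 1.02],    # nominal reading
                 [-158.9, 1.64]])   # warming tank plus vapor spike
print(model.predict(live))          # 1 = normal, -1 = anomaly
```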

AI in subsea separators

Subsea separators increase the oil recovery rate by separating a well stream into gaseous and liquid components. As oil production and recovery rates are directly correlated to separator performance and health, monitoring separator health in real time and proactively predicting potential failure modes becomes critical. This industrial OEM was trying to solve three problems using digital platforms:

  • Remotely diagnosing the digital health of subsea separators;
  • Reducing NPT by having a prognosis for the failure modes; and
  • Reducing high operational costs associated with expensive unwanted trips to the rig.

The operator digitized sensor data from inlet pressure, choke pressure, flow rates (gas, oil and water) and differential pressures/fluid levels. It also integrated its physics-based models with statistical data-driven models to predict risk of failure.
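One standard way to integrate a physics-based model with a data-driven one is residual modeling: learn the gap between the physics prediction and the measurement. A minimal sketch, with hypothetical file and column names; the operator's own separator model would supply the physics prediction:

```python
# Hybrid modeling sketch: learn the residual the physics model cannot
# explain. physics_predicted_dp is assumed to come from the operator's
# own separator model; all column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("separator_history.csv")
X = df[["inlet_pressure", "choke_pressure", "gas_rate", "oil_rate", "water_rate"]]
residual = df["measured_dp"] - df["physics_predicted_dp"]

correction = GradientBoostingRegressor().fit(X, residual)

# Hybrid estimate = physics prediction + learned correction; a growing
# correction is itself a useful early-warning signal.
df["hybrid_dp"] = df["physics_predicted_dp"] + correction.predict(X)
```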

What has been learned?

Mindsets eclipse toolsets. The race to digital transformation in operating wells is not just about digital toolsets. It’s about changing mindsets. The veterans of the oil and gas industry have become accustomed to tangible and reliable outcomes. Digital is intangible and iterative as the AI algorithms learn and adapt. This requires executives to think about operations differently and reimagine the way they view upstream operations.

Converting outputs to outcomes. Digital involves executing a great deal of physics and math-based models on sensor streams. These digital outputs then need to be translated into a meaningful operational outcome, like increasing first-time resolution of upstream assets and reducing NPT, which are then mapped to dollars realized. Data need to be converted to dollars.

Sensor lakes. One of the foundational pieces for digital transformation is having a critical mass of labeled fault mode data. This creates a trail of “digital bread crumbs” and leaves a marker on the asset timeline, indicating how the machine specifically failed. This information is guzzled by the deep neural network to discover the weights that minimize prediction deviation. Examples of labeled fault data include electric motor failure, hydraulic leakage and stick-slip events.

Intelligent industrial diagnostic “bot” assistance. As the experienced workforce retires, it is important to codify that knowledge for the future workforce. Industrial bot assistants can codify frontline experience and head office intelligence into a comprehensive diagnostic template and make it accessible via “don’t make me think” voice interaction instead of complex dashboard interaction. For example, Flutura created an “Ask Cerebra” diagnostic bot for catwalks that helped a large OEM frontline team step through a diagnostic workflow to understand fault modes codified from years of experience. With the advent of natural language processing algorithms powered by deep learning, field technicians can interact with the asset diagnostic applications through voice interactions, just as bots help in customer service.

Integrate heart and mind. Digital transformation is a complex process requiring tact in dealing with sensitive human issues in a complex ecosystem. This has required seasoned leadership that can understand the transformative potential of digital technologies but can also provide a human-centric approach to solving problems.

These are deep digital shifts that have reached an inflection point, creating a massive transformation of oil and gas operations beyond “vanilla” condition-monitoring systems. The challenges are more human than technological. They require oil and gas leadership to rethink operating models, business models and economic models, to create a blueprint for responding to these tectonic, irreversible shifts, and to recognize that the status quo is not an option as the digital wave seeps into the electromechanical world.


Scaling IIoT successes

During a panel at the SIIA Propelling IoT: Emerging IoT Business Opportunities event in Houston, TX, I shared examples of how the IIoT affects business outcomes in the industrial sector. Anyone who keeps up with the IIoT knows it is often a situation of crawl, walk, run before you start seeing real ROI. But once you are running, it is simple to duplicate that success, whether you are reducing operational cost, increasing yield and quality of products, reducing downtime or improving safety.

The level everyone is trying to reach is prognostics—what are the next best actions to undertake? The people in the field don’t have time to think about this; they just need to know what is failing, when it is going to fail and what maintenance to do next.

Prognostics is key in answering these questions—connecting live machine data with tagged events, labeled data and maintenance data.

With the right data sets, a company can move toward algorithmic spare-part refurbishment while generating mechanical repair or work orders with specific instructions for the repair, all while publishing a maintenance-repair video on a working 3D digital twin. There is a lot of opportunity for oil and gas companies to deploy these types of Industrial IoT solutions in their operations; some of the high-value equipment includes hydraulic fracking units, catwalks, top drives, mud pumps, etc. For petrochemical plants and refineries, the equipment includes cracking units, reactors, valves and pumps.

Using artificial intelligence, companies can start turning the ship around on their operations, moving from a reactive mode to a predictive/preventive mode, which enables them to unlock revenue streams buried in the data.

I recommend having a workshop with the right decision makers and stakeholders within an organization, equipped with a budget and IIoT-value KPIs. Find a business problem to solve first, versus trying to sell a solution that may not fit the business case. Do your homework on the target market and target customer, and know the business problems they are trying to solve. Then bring in subject-matter experts who know the process, the equipment and the operations very well.


- Rick Harlow (Executive Vice President of Americas at Flutura Decision Sciences and Analytics)


How IIoT platforms with AR/VR help OEMs reduce operating costs

Kurt from our Houston office recently visited an upstream operation in Eagle Pass, Texas. At this operation, a variety of mission-critical equipment was operating and collecting crucial production data points. It took Kurt a good six hours to get to the facility. It was time-consuming, tedious and costly. We asked ourselves a simple question: “How can we reduce Kurt’s visits to Eagle Pass by combining the 3D immersive experience of a virtual reality (VR) tool with the deep analytical capabilities of an IIoT platform?” That question led to the development of augmented reality (AR)/VR apps that gracefully complement an IIoT system.

Take, for example, a pump or a motor of the kind that commonly powers upstream operations. Our IIoT platform’s anomaly detection algorithms flag and mark cases of motor overheating. These anomaly markers are overlaid on a 3D model of the asset, and reliability engineers, one sitting in Houston and another in Oslo, can examine the unhealthy motor from the comfort of their headquarters. Sensor readings from the motor are streamed from historian tags in real time to the IIoT platform, which is then integrated with the AR/VR app, enabling the engineers to perform multiple asset-examination operations. They can get “exploded” and “zoomed in” views of the asset and can rotate the asset across the 3D axis to pinpoint what is going wrong and where.

In addition to experiencing the asset, the reliability engineers at headquarters can use voice and hand-based gestures to understand the sequence of events leading up to a high-value failure mode.

These features are extremely useful for optimizing upstream operations, reducing trips and shaving off costs in a hyper-competitive marketplace. As Harvey Firestone said, “Capital isn’t so important in business. Experience isn’t so important. You can get both these things. What is important is ideas. If you have ideas, you have the main asset you need, and there isn’t any limit to what you can do with your business and your life.” These new ideas promise to change the way OEM and operators run their upstream operations.



What can healthcare teach industrial folks?

Submitted by Derick Jose on Mon, 08/07/2017 - 11:02

The industrial world (oil and gas, utilities, refineries, discrete/continuous manufacturing) is in the early stages of a radical transformation powered by artificial intelligence (AI). AI promises to bring unprecedented efficiencies and competitive advantages by radically transforming business models. As the race for AI-powered transformation accelerates in the coming years, it would be wise to learn from the experiences of the healthcare industry, which went through a similar trajectory.

Example 1: Anomaly detection in sensor streams

Patients suspected of having arrhythmia will often get an electrocardiogram (ECG) in a doctor's office. However, if an in-office ECG does not detect a problem, the doctor prescribes the patient a wearable ECG that monitors the heart continuously for two weeks. The resulting heartbeat data is then forensically examined (second by second) for any indications of problematic arrhythmias, some of which are extremely difficult to differentiate from harmless heartbeat irregularities. AI algorithms powered by deep-learning techniques can detect 13 types of arrhythmia from ECG signals, helping doctors detect and treat heart problems and extend human life.

How is this applied to the industrial sector? We have been working to detect unusual “electromechanical” rhythms in sensor data from a variety of upstream assets (like frac pumps), thereby diagnosing the presence of specific fault modes, such as impending lube-oil or gearbox issues, and extending asset life.
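To illustrate the analogy on the industrial side, here is a toy rhythm check over a simulated sensor signal. A production system would use a learned model rather than a fixed z-score against a healthy baseline; the signal and the injected disturbance are invented:

```python
# Toy "electromechanical rhythm" check: score live samples against a
# healthy baseline, the industrial analogue of flagging an irregular
# heartbeat. Signal and disturbance are simulated.
import numpy as np
import pandas as pd

t = np.arange(5000)
signal = np.sin(t / 8.0) + np.random.default_rng(1).normal(0, 0.1, t.size)
signal[4200:4260] += 2.0          # injected lube-oil-style disturbance

s = pd.Series(signal)
mu, sigma = s[:4000].mean(), s[:4000].std()   # healthy-period baseline
z = (s - mu) / sigma
anomalies = s.index[z.abs() > 3]
print(f"{len(anomalies)} anomalous samples, first at t={anomalies.min()}")
```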

Example 2: Diagnostic image detection

Diabetic retinopathy (DR) is the fastest-growing cause of blindness, and more than 415 million diabetic patients are at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. Unfortunately, medical specialists capable of detecting the disease are not available in many parts of the world where diabetes is prevalent. Deep-learning algorithms examine pictures of the back of the eye and rate them for disease presence and severity. Severity is determined by the type of lesions present (e.g., microaneurysms, haemorrhages, hard exudates, etc.), which are indicative of bleeding and fluid leakage in the eye.

How can this be applied to the industrial sector? We are seeing an increased adoption of drones for pipeline inspection in midstream and downstream parts of the oil and gas business. These drones generate terabytes of images scanned by AI algorithms to detect the presence of leaks and fractures. AI can definitely see things the human eye misses in images.


Example 3: “Diagnostic bot in your pocket”

How does having a “doctor in your pocket” feel? That’s precisely what healthcare start-ups are doing by cutting down on unnecessary consultations and developing AI that can engage patients just like a physician. The use of bots by healthcare AI companies (HealthVault, Babylon Health and Medwhat) to diagnose health conditions is increasingly popular.

How is this applied to the industrial sector? We are developing AI bots that can run diagnostic tests on sensor data and highlight the presence of poor quality lube oil, asset misuse on rigs, vibration anomalies and more. These insights can reduce logistic costs associated with troubleshooting remote assets in an industry where the price of a barrel dictates which companies survive and which go out of business.

The healthcare industry is a front-runner in applying AI to mission-critical tasks, whether they be image-detection, anomaly-detection, sequence-detection or bots to automate diagnosis. Industrial AI practitioners can learn a great deal from the healthcare successes and failures, and apply those learnings in the industrial sector.

Jack Welch aptly said, “If the rate of change on the outside exceeds the rate of change on the inside, the end is near.” It’s important for industrial companies to ask the question “Is the rate of change outside greater than that inside?”

Derick Jose is co-founder and chief data scientist at Flutura Decision Sciences and Analytics.


Deploying an AI-driven IoT platform: 21 questions to ask

My experience with Fortune 100 global energy, engineering and OEM companies tells me that a tectonic shift is happening in the energy industry, a shift that promises to change the game in the marketplace forever, leaving the traditional asset- and capex-based business models behind. Increasingly, we are seeing that AI-driven IoT platforms are becoming the digital nervous systems of 21st-century industrial companies. IoT platforms are going to be the foundation on which new business models are created, powering new revenue pools and expanding the engineering organization’s foray into other value-added services that bring predictable revenue streams. As a result, the choice of an AI-driven IoT platform is an extremely strategic one that cannot be reversed easily.

As the engineering world collides with the digital world, there is a great deal of confusion, and our team felt that, more than finding answers, the right questions needed to be asked. Having been soaked in the AI and industrial IoT world, we would like to share a list of 21 mutually exclusive and collectively exhaustive questions spanning the core dimensions of applying AI to an industrial context.

Instrumenting asset blind spots 

In order to assess the scope of the work, one of the initial tasks at hand is to figure out the “machine learnability” quotient of the asset. Most electromechanical assets have rudimentary instrumentation and may not have the sensors required to capture information in order to model the asset. In order to get context of the remote asset, here are a few questions that reveal the instrumentation and asset landscape:

1. What events are being emitted by the asset today?
2. What events are not being broadcast by the asset that need to be instrumented, or “sensor enabled,” going forward for the AI algorithm to learn from?

Sensor health monitoring

One of the most common issues faced in the rugged industrial context is the malfunctioning of sensors, which can result in corrupt data being fed to the AI algorithms. As there are hundreds of thousands of assets and sensors, it is very important to know what percentage of them are transmitting healthy sensor data. Basically, we need to look for the absence of events from assets of interest; for example, some sensors may have battery issues and stop transmitting:

3. Does the AI-driven IoT platform have dashboards that reveal the number of sensors not broadcasting state information?
4. Do the sensor health monitoring dashboards reveal the length of time that an asset has not been communicating?
5. Does the sensor health monitoring dashboard flag events with spurious data or incorrect data?
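Questions 3 and 4 reduce to a "last seen" computation over the event stream. A minimal sketch, assuming a hypothetical event table with one row per sensor transmission:

```python
# Which sensors have gone quiet, and for how long? Schema is assumed:
# one row per transmission with columns sensor_id and ts.
import pandas as pd

events = pd.read_parquet("sensor_events.parquet")
last_seen = events.groupby("sensor_id")["ts"].max()
silence = pd.Timestamp.now() - last_seen

silent = silence[silence > pd.Timedelta("15min")].sort_values(ascending=False)
print(f"{len(silent)} sensors not broadcasting; longest-silent first:")
print(silent.head(10))
```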

AI-driven signal detection

AI is where deep mathematics meets machines: deep-learning algorithms crawl sensor data in search of patterns to predict asset downtime, asset failure and asset optimization opportunities:

6. Which AI algorithms need a data scientist to configure, and which algorithms can be executed by an asset engineer?
7. Can the AI platform signal anomalies in real time?
8. Can the AI platform express the taxonomy of anomalies experienced by an asset?
9. Can the AI platform correlate the anomalies to asset outcomes (downtime, remaining useful life) that need to be modeled?
10. Can the AI platform have multiple models blended together as an ensemble?
11. Can the AI platform predict in real time, or is the prediction in offline mode?

Industrial data product creation   

Industrial data products are a set of AI solvers for real-world business problems. The apps can answer a correlation question or trigger an action signal. As engineers start layering intelligence over their assets using data products, here are a few questions that can help:

12. Can the IoT platform guide users to create edge data products using APIs or workflows?
13. Can the IoT platform create forensic data products that go beyond “dot on the map” to identify interesting correlations never seen before?
14. Can the IoT platform triangulate signals across heterogeneous data pools: sensor historian data streams, maintenance events, ambient asset conditions and other data streams?

Scalability of sensor event streams 

The industrial IoT world will generate many more events than the consumer world. Take, for example, the Bombardier C-Series jetliner with Pratt & Whitney engines, which have 5,000 sensors embedded within them. Emitting up to 10 GB of data per second, a 12-hour flight can generate 844 TB of data. The scale required for data ingestion is orders of magnitude higher. With that in mind, here are a few questions on scalability:

15. What is the peak emission rate of my asset events? Is it thousands per hour, millions per hour?
16. What is the peak ingestion rate of the IoT platform?
17. How much time will it take for an alarm event to reach the central command center? Is it milliseconds or seconds or minutes?

Pricing model of AI-driven IoT applications

The industry is in the early stages of its evolution, and multiple pricing models exist. Over a period of time, depending upon the complexity of the industrial process and its linkage to a financial outcome, the pricing model will eventually stabilize. In the meantime, here are a few questions to ask:

18. Should pricing be set per asset or per asset type?
19. Should pricing be set per app or per cluster of apps?
20. Is event-volume-based pricing, as offered by players like Splunk, appropriate?
21. Or is outcome-based pricing, like pay-per-thrust for aviation engines, the right model?

Closing thoughts 

With all the considerations above, the choice of an industrial AI-driven IoT platform for assets is a multidisciplinary affair requiring three lenses to look through: the financial lens, the engineering lens and the software lens. Taking the time to consider all of these variables before you begin down the AI path takes effort, but it makes the task a lot less risky.

Albert Einstein once said, “We cannot solve our problems with the same thinking we used when we created them.” We hope the above questions serve as an actionable AI playbook as you plan out your strategy for an industrial IoT initiative.


7 Best Practices for Applying Industrial Artificial Intelligence Bots

As experienced industrial employees leave the workforce, AI bots are filling in the gaps they leave behind. Here are some best practices.

Bots and virtual diagnostic agents are increasingly entering our daily habits and helping with intermediate tasks that were typically human-driven. For example, when we shop online, a conversation bot is activated to understand our purchase intent and, depending on our inputs, a recommendation bot suggests possible items to purchase. Another example is from the healthcare industry: Flow Health, a start-up, has AI bots that diagnose potential health conditions before a person actually sees the doctor. The consumer world is slowly being taken over by transaction-automating bots.

At Flutura, we have been focused on applying AI bots to the industrial context; for example, we have bots facilitating reliability engineering and maintenance diagnostic tasks. Based on our experiences, here are 7 key best practices for applying AI bots in a practical way.

Best Practice-1: Map the high impact tasks

How do you begin your journey to introduce AI bots? It all starts with identifying one high value task.

Which tasks are to be “botified”?

For example, in upstream oil and gas, a field service engineer regularly diagnoses irregularities in pumps, motors, winch drives and the like as they arise. Which high-value failure mode is worth diagnosing with a bot instead?

What is the frequency of execution of tasks?

For example, a bot may be redundant for a task that rarely needs to be performed, whereas a mud pump, which is used in harsh conditions, may be down frequently enough to justify one.

Best Practice-2: Target a measurable operational outcome

Flutura was working with a leading industrial chemical manufacturer that experienced $16 million worth of reactor downtime caused by poorly behaving valves. An AI-powered valve diagnostic bot now helps the company spot valves with a poor health score and recommends the next best action. This bot is expected to bring down the economic impact of downtime by 40%.

Best Practice-3: Decoding user intent from free text using classifier models

One of the primary tasks of the AI bot is to infer the user's intent. For example, based on the query text, does the user want to prioritize the alarms to respond to anomalies, or should a root cause analysis of the events leading to an equipment failure mode be conducted instead? The microservices decoding the user intent should be robustly tested so that the industrial engineers' experience in the interaction is optimal.
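The core of such an intent-decoding microservice can be sketched as a small text classifier. The training queries and intent labels below are invented for illustration; a production system would use far more data and likely a stronger model:

```python
# Toy intent classifier: map a free-text query to a diagnostic intent.
# Training examples and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "show me the critical alarms on pump 7",
    "which alarms should I respond to first",
    "why did the compressor trip last night",
    "root cause of yesterday's valve failure",
]
intents = ["prioritize_alarms", "prioritize_alarms",
           "root_cause_analysis", "root_cause_analysis"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(queries, intents)
print(clf.predict(["what caused the gearbox failure on rig 12"]))
```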

Best Practice-4: AI bot integration with adjacent operational systems

Flutura is building a diagnostic AI bot for rod pumps. These do not exist in isolation. The diagnostic bot needs to “listen” to alarms generated from electronic condition-monitoring systems and other data loggers and have the ability to automatically raise a ticket alerting operators to the potential anomaly.

Best Practice-5: Bot integration with AR/VR applications for collaborative troubleshooting

One of the best use cases for reducing operational cost in upstream oil and gas is collaborative remote troubleshooting in an effort to reduce rig visits. For example, a maintenance engineer sitting in Houston can collaborate with a reliability specialist sitting in Norway by wearing a virtual reality headset. They can dig deep into the real-time MWD (measurement while drilling) logs of a drill bit in Saudi Arabia. This ability to collaborate globally reduces the operational cost associated with expensive rig visits and increases first-time resolution of trouble tickets. In this context, Cerebra's diagnostic AI bots are integrated with remote VR/AR apps from Metaverse and provide an immersive three-dimensional asset experience to the maintenance/reliability community.

Best Practice-6: Intermediating interbot conversation

The AI bot architecture must accommodate interactions between bots. For example, a diagnostic bot specialized in isolating issues with a frac pump should be able to interact with a cementing-truck diagnostic bot, as both assets are related in the real-world upstream process.

Best Practice-7: Context sensitivity

As the AI diagnostic bot interacts with a reliability engineer for upstream assets, it needs to maintain the context state in which the interactions occur. The context state could be driven by the well where the operation is taking place, the actual asset ID being diagnosed, the relationship this asset has to its ambient context and the operator running the asset. This ensures the diagnosis can be traced to engineering efficiency or to operator handling.

Closing thoughts

As more and more of the experienced workforce leaves the industry, it's necessary to digitally codify troubleshooting best practices from years of experience in solving high-value asset failures. It is also important to bring down the operational costs associated with remote troubleshooting. Emily Greene Balch famously said, “The future will be determined in part by happenings that it is impossible to foresee; it will also be influenced by trends that are now existent and observable.” We at Flutura believe that AI bots combined with AR/VR are the future of industrial operations, and we are ready to execute with that in mind.

Derick Jose, Co-Founder and Chief Data Scientist, Flutura Decision Sciences and Analytics
Derick is the Co-Founder and Chief Data Scientist at Flutura and has been in the analytics space for close to three decades. Derick oversaw the evolution of data science and is one of its chief architects. His career has brought him into many organizations, helping them define the vision for their data monetization programs. He is currently developing game-changing data products in the industrial IoT space to support disruptive business models for the energy and engineering industries.
Prior to founding Flutura, Derick was Vice President, Knowledge Services at Mindtree and was part of an elite team that architected the world's largest citizen biometric and demographic data infrastructure.


Marco Polo and seven monetizable IoT intelligence use cases

In the 13th century, Marco Polo set out with his father and uncle on a great voyage across uncharted territories. They traveled across the vast continent of Asia and became the first Europeans to visit the Chinese capital. For 17 years, Marco Polo explored many parts of the world before finally returning to Venice. He later wrote about and mapped out his experiences, inspiring a host of new adventurers and explorers to travel to the exotic lands of the East.

We are all on a voyage similar to Marco Polo’s, navigating the uncharted ocean of IoT big data — seeking those elusive use cases. As we navigate this complex ocean of industrial IoT data, we need two things:

  1. Maps (industry-specific use cases)
  2. Meta patterns (common across industries)

These would help other “Data Marco Polos” avoid the potential minefields we have encountered.

We have abstracted and distilled common big data use cases in industrial IoT that pass the business case test. These are based on real-world projects executed across energy and heavy engineering industries in the U.S. and Japanese markets. Here are the seven core IoT big data use cases that we mapped out:


1. Creating new IoT business models
We worked with a customer that used our IIoT big data technology to restructure the pricing model of field assets based on ultra-specific usage behavior. Before adopting the IIoT analytics product, the customer had a uniform price point for each asset. Deploying the IoT analytics technology helped them transition from a uniform pricing model to executing usage-based dynamic pricing that resulted in improved profitability.

2. Minimize defects in connected plants
The client was a process manufacturing plant located in the Midwest, manufacturing electrical safety products. The quality of its electrical safety product could mean life or death for folks working in the power grid. This customer had sufficiently digitized the manufacturing process to get a continuous real-time stream of humidity, fluid viscosity and ambient temperature conditions. We used this new, rich sensor data pool to identify drivers of defect density and minimize them.

3. Data-driven field recalibration
Many assets come with default factory settings that are never recalibrated, resulting in suboptimal performance. We worked with an industrial giant charged with shipping a crucial engineering asset to stabilize the power grid. These assets were constantly inserted into the network ecosystem with default parameter settings. One powerful question we asked was, “Which specific parameter settings discriminate the failed assets from the assets performing well?” Discriminant analysis revealed the parameter settings that needed to be recalibrated, along with the optimal band setting, as sketched below. By putting this simple intervention in place, we were able to dramatically reduce the number of failure events in the system.
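A minimal sketch of that discriminant-analysis step, with hypothetical file, parameter and label names standing in for the real settings:

```python
# Which factory parameter settings separate failed assets from healthy
# ones? All column names are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("asset_settings.csv")     # one row per deployed asset
params = ["gain_setting", "trip_threshold", "sample_rate_hz"]
X, y = df[params], df["failed"]            # failed: 0 = healthy, 1 = failed

lda = LinearDiscriminantAnalysis().fit(X, y)

# The sign and magnitude of each coefficient indicate which settings
# push an asset toward the "failed" class, guiding the recalibration band.
print(dict(zip(params, lda.coef_[0].round(3))))
```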

4. Real-time visual intelligence
This is probably the most widely adopted use case, where the platform answers the simple question of “How are my assets doing right now?” This could be transformers in a power grid, oil field assets in a digital oil field context or boilers deployed in the connected plants context. The ability to have real-time “eyes” on industrial field assets streaming in timely state information is crucial. The reduced latency combined with the visual processing of out-of-condition events using geospatial and time-series constructs can be liberating for hardcore engineering industries not used to experiencing the power of real-time field intelligence.

5. Optimizing energy and fuel consumption
For many moving assets like aircraft, fleet trucks and ships, fuel cost is a significant line item in operations. Sensor data mashed with location data collected from mobile assets can help optimize fuel efficiency. We worked with a major fleet owner to reduce fuel consumption by 2%, which led to millions of dollars being shaved off the company’s operational expenses. The customer was able to reallocate the funds to a major project it had been putting off due to budget constraints.

6. Asset forensics
As assets become increasingly digitized, businesses can get a granular, 360-degree view of their health spanning sensor data pools, ambient conditions, maintenance events and connected assets. One can confirm an asset failure hypothesis and detect correlations from these new, rich data pools. This is much richer intelligence for diagnosing asset health than existing processes provide today.

7. Predicting failure
Once there is a critical mass of signals, multivariate models can be built to score an asset on failure probability. Once this predicted failure probability crosses a certain threshold, it can automatically trigger a proactive ticket in the maintenance system (like Maximo or other systems) for an intervention, such as replacing a part, recalibrating a machine or flagging it for closer inspection; the sketch below shows the shape of that handoff. Many companies are looking toward predictive maintenance models, versus time-based maintenance programs, to be more efficient in their operations. We have a customer that was able to restructure its entire maintenance program around real-time streaming signals from its machines. This company has been able to provide a more efficient maintenance program for its customers based on the actual performance of the equipment.
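A minimal sketch of the probability-to-ticket handoff described above; the scoring model, the threshold and the ticket call are placeholders, not a real Maximo integration:

```python
# Probability-to-ticket handoff sketch. The trained model, the 0.75
# threshold and open_ticket() are placeholders for illustration.
FAILURE_THRESHOLD = 0.75

def score_failure_probability(features: dict) -> float:
    # Stand-in for a trained classifier's predict_proba for "failure".
    return 0.82

def open_ticket(asset_id: str, p: float) -> None:
    print(f"proactive maintenance ticket for {asset_id}: failure risk {p:.0%}")

reading = {"vibration_rms": 4.1, "bearing_temp_c": 96.0}
p_fail = score_failure_probability(reading)
if p_fail >= FAILURE_THRESHOLD:
    open_ticket("transformer_042", p_fail)
```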

As Marcel Proust said, “The voyage of discovery is not in seeking new landscapes, but in having new eyes.”

Good luck with your IoT big data voyage!



OEMs, deep learning & IoT powering new biz models

Original equipment manufacturers (OEMs) are increasingly turning to predictable, recurrent digital services to reveal new revenue streams. Let’s take a look at three business models and how deep learning can help reduce the time to solve issues while increasing revenue.

Model 1: Remote Diagnostics as a Service

I have worked with a leading oil and gas OEM in Houston that had a vision to create new digital revenue streams. The first service that resonated with the oil-field services companies operating the assets was monitoring; they wanted to reduce nonproductive downtime. With the increasing footprint of sensors, this model is being extended to additional upstream assets like downhole drill bits, fracking pumps, top drives and rod pumps.

Model 2: Performance Benchmarking as a Service

I have worked with an OEM that benchmarked the health of the assets deployed and, depending upon their condition, offered additional value-added services, such as finding a buyer for assets past their performance prime. Performance Benchmarking as a Service is still in its infancy, and we expect this trend to accelerate rapidly in the coming years.

OEM Digital Business Models

Model 3: Extreme Pricing Personalization

In the automotive industry, Progressive Insurance created an offering around bartering machine data (mileage, braking, turns, acceleration) from cars. This data served as a proxy for driving habits and was provided in return for discounted insurance prices. Progressive installed a device in the car that tapped into the machine data generated, from which driving habits could be inferred. The data was used to create risk profiles that informed pricing models unique to the individual, as opposed to being part of a generic segment.
These examples illustrate how the additional information gleaned from deep learning provides new perspectives on myriad situations, offering new levels of business insights. Deep learning is the new frontier for business to truly begin understanding how to mitigate risk and find new pools of revenue.

By Derick Jose, Flutura co-founder and chief data scientist.


Three practical applications of deep learning and IoT in oil and gas

Deep learning and IoT are two game-changing technologies that have the potential to change the stakes for oil and gas companies facing profit pressure in the face of the dramatic drop in the price of oil. In this blog, based on Flutura’s extensive experience in the oil and gas industry, we highlight three practical use cases from the trenches where these technologies are applied to solve real-life problems and impact meaningful business outcomes.

1. Deep learning algorithms detect risks in oil pipelines

In our first use case, we take a look at how algorithms can reveal patterns and information not easily seen in other ways. For instance, drones are increasingly being used for pipeline inspections. As these drones fly along a pipeline, they record an enormous amount of video footage, and it is very difficult for a human reviewer to spot risks such as leaks and cracks in that footage. Deep learning algorithms can automatically detect the pixel signatures of cracks and leaks in drone footage that humans can miss, thereby minimizing infrastructure risk.
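A minimal sketch of such a pixel-signature classifier, assuming 128x128 frame crops and a labeled training set; a production model would be much larger and trained on far more footage than this toy:

```python
# Toy binary crack/leak classifier over drone frame crops. The input
# size, architecture and training set are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(crack or leak present)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# Training data would come from labeled frame crops, e.g. via
# keras.utils.image_dataset_from_directory("frames/").
```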

2. Deep learning algorithms detect asset behavior anomalies

While working with several oil and gas companies, we were able to collect a great deal of data from sensors strapped onto upstream assets like frack pumps and rod pumps. Looking for anomalies in high-velocity time-series parameters is like looking for a needle in a haystack for mere mortals. Deep learning algorithms can “see” anomalies that traditional rule-based electronic condition monitoring systems miss and can alert rig operations command centers.

3. Rig diagnostic bots

While providing remote diagnostic services to industrial assets, the conventional form of interaction is through traditional dashboard communications. With the advent of natural language processing algorithms powered by deep learning, field technicians can interact with the asset diagnostic applications through voice interactions just as bots help in customer service.

Concluding thoughts

The advent of deep learning and IoT has brought about great strides in learning, such as predicting and determining attributes, including insights on anomalies, digital signatures, and acoustic changes and patterns. Being able to see beyond what can be seen provides the potential, as illustrated in our use cases, to head off problems and structural failings, saving organizations time and money and keeping all who benefit from their services safer. We envision a future where the twin digital capabilities of deep learning and IoT will differentiate the winners from the laggards in the competitive energy marketplace — and the first steps are being taken right now.



Flarrio

Pavel Romashkin, Volitant AI

The future state of the AI technology in IoT is complete, efficient automation. AI will learn to use industrial equipment better than humans and, as a result, replace human operators.

Derick Jose, Flutura

Industrial companies in the energy and engineering sectors are trying to find practical applications on the ground for AI and IoT that impact a measurable financial outcome, and they are running early-adopter pilots.

Connell McGill, Enertiv

The future of AI in IoT is a world filled with so much data that we can know exactly what is going on everywhere at once and optimize the general state of things. Kind of like a human nervous system, but for the entire planet.

Sastry Malladi, FogHorn

AI is rapidly penetrating edge computing, particularly in IIoT. Analytics and machine learning are already prevalent in edge devices, and the next logical step is AI to further optimize processes.

Nelson Chu, Parametric

Soon, AI will automate routines for IoT. For example, when you turn on your lights and TV together, it will create scenes for automation. Additionally, AI could use sensors to generate shopping lists.

Rich Rogers, Hitachi

In the last century, electricity fueled the industrial revolution, giving us powerful factories and machines. Today, IIoT and AI software are bringing them to life in new and unexpected ways.

Mahi de Silva, Botworx.ai

AI is already being integrated into IoT and even IIoT, where the machines and products are able to diagnose themselves and interact with their human operators.

Jeremy Pola, Novecom

The future of IoT and IIoT is in manufacturing user experiences that deliver advanced analytics and data visualisation. This will be achieved through collaboration of computer science and data science.

Nenad Cuk, CroatiaTech.com

I see AI systems in the near future controlling, navigating and maintaining IoT devices and products. One category in particular is drones and how they are managed. With thousands of drones in the sky, AI will need to carry the weight and manage these systems on a grand scale.


Five Trends That Will Impact The Energy Industry

Five Trends That Will Impact The Energy Industry

Five Trends That Will Impact The Energy Industry

Theses top five trends in 2018 will positively impact the oil and gas industry. (Source: Flutura)

As energy processes and industrial assets become digitized, they climb on an exponential growth curve instead of a linear growth trajectory. This digital transition is ripe with many possibilities, whether it is in artificial intelligence (AI), remote diagnostics using digital twins or next-generation usage-based operating models powered by sensor data. Oil and gas companies need to prepare for five trends.

Trend 1: Reimagine industrial AI-powered operating models

Most industrial AI applications are geared toward providing operational efficiency impacting the cost side of the balance sheet such as increased uptime and well yields and reduced HSE risks. For example, Flutura is powering a “digital prognostics as a service” model for a major upstream company where instead of reacting to asset downtimes, the company can proactively complete remote diagnostics and in-person interventions based on fault mode predictions from an AI model that is watching real-time equipment sensor streams.

Innovative business models will transform the market landscape for drilling service providers, equipment manufacturers and owner operators. Winners and losers will be decided by the ability of these traditional industrial sectors to deeply embed AI into core equipment and processes. This requires that many entrenched players reimagine their business operating models.

Trend 2: Upstream AI impacting well and equipment outcomes

AI platforms in 2017 were generic and untuned to the nuances of oil and gas. There has been a great deal of momentum in upstream areas. For example, Flutura’s Cerebra industrial AI application center has preconfigured solvers for ultraspecific upstream problems such as deepwater asset diagnostics, hydraulic fracturing, LNG and more. Expect to see more AI apps this year that will impact measurable outcomes using algorithms highly specialized to solve high-impact problems.

“Vanilla” data science will not suffice to solve mission critical problems in the oil and gas industry. As deep-learning algorithms become democratized, the importance of novel AI applications that solve a specific and complicated problem will increase. These applications will become more important than a horizontal AI platform, which requires immense tuning for the industry context.

Trend 3: Innovations in industrial sensors to see blind spots

A primary challenge in the practical execution of AI projects is blind spots in vital signals. For example, an upstream company realized through its work with Flutura that while its rotary assets had sufficient instrumentation (e.g., lube oil pressure and temperature, rpm, torque, etc.), there were critical blind spots when it came to vibration sensors and shock sensors that were a crucial signal for the deep-learning algorithm to spot anomalies leading to failure. Some specific blind spots where significant sensor innovation will be seen this year include the detection of fluid and gas quality using optics based on differential interferometry, tampering of oil containers, emissions and noise anomalies in close proximity to rotating assets.

Making assets and process context aware requires increasing the asset sensitivity to events both within and around them. Model quality is directly correlated to the quality of sensor streams. The better the sensors get, the better the AI models become.

Trend 4: Edge intelligence

There are two types of intelligence: informational and actionable. Edge intelligence delivers the actionable kind. For example, if a leased asset in an asset-as-a-service offering is repeatedly being misused by a worker, edge intelligence will notify the supervisor to intervene. This decision-making loop cannot afford the time needed to ship massive sensor event data over the network and then wait for the AI layer at the center to respond. Localized sense-and-respond layers are needed to be operationally effective.

Edge intelligence is ideal for “fail operational” behaviors where an asset or process can complete its core operation even when a part of it fails. Edge intelligence also is ideal when reliability and latency are important. Large oil and gas projects have thousands of sensor events streaming across myriad wells with some decisions needing to be reliably made within milliseconds.
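To make the latency argument concrete, here is a minimal sketch of a sense-and-respond rule evaluated entirely on the edge device; the rpm limit, window size and alert hook are hypothetical, not an actual deployment:

    from collections import deque

    MISUSE_RPM_LIMIT = 1800   # hypothetical contract limit for the leased asset
    WINDOW = 50               # readings held in device memory

    recent = deque(maxlen=WINDOW)

    def notify_supervisor(msg: str) -> None:
        # Stand-in for a local alert channel (SMS gateway, HMI panel, etc.).
        print("ALERT:", msg)

    def on_reading(rpm: float) -> None:
        """Evaluate each sensor event locally; nothing is shipped to the center."""
        recent.append(rpm)
        over = sum(r > MISUSE_RPM_LIMIT for r in recent)
        if len(recent) == WINDOW and over / WINDOW > 0.8:
            notify_supervisor(f"sustained over-limit operation: {over}/{WINDOW} readings")
            recent.clear()  # avoid re-alerting on the same window

    for rpm in [1900.0] * 60:  # simulated misuse stream
        on_reading(rpm)

Because the window lives in device memory and the rule is a handful of comparisons, the intervention fires in milliseconds, with no round trip to the central AI layer.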

Trend 5: Sensor data highways

Today’s data networks are insufficient to keep up with the high rates of data transmission required by rising sensor density on upstream processes and assets, combined with increased frequency of transmission. Companies like Sigfox and Ingenu are focused on building dedicated next-generation infrastructures for moving sensor data. It will be like getting a dedicated lane on a national highway, where sensor data streams supporting mission-critical upstream processes and equipment can move unimpeded.


Artificial intelligence, real quality control

What do process chemical manufacturing and cooking have in common?


Flutura's Derick Jose

Both follow recipes: cookbooks in the kitchen, standard operating procedures in process-chemical manufacturing. Both need quality inputs. Both need dynamic control as the process unfolds, whether adding the right amount of pepper or calibrating temperature. And both need feedback signals, whether a chef sampling the dish midway or quality signals in process chemicals.

The problem facing the chemical-manufacturing industry is that, while standard operating procedures exist, they do not take into account the dynamic conditions under which actual manufacturing happens. For example, the mixer’s vessels may have been used before, leaving residuals, and the ambient air may carry moisture or dust that influences product quality.

As a manufacturer, you face specific blind spots:

  •  What influence does each of these factors have on product quality outcomes? (Which factors are noise and which are signals?)
  •  What is the rank of each influencer variable? (Some variables may have disproportionately more influence on quality outcomes than others.)
  •  What is the expected quality outcome based on current conditions, and what would be the next best frontline action to take to reduce wasteful production?
I can illustrate this with a real-world story. We recently executed a project for an industrial-glue manufacturer and scaled it across multiple production lines in several countries. The problem: wasted production was costing the customer hundreds of millions of dollars because of the industry’s stringent quality controls, and they did not have the tools to pinpoint what influenced the quality outcome.

The solution: we built surgical AI apps that process multiple input signals (lab quality signals, sensor anomalies, process signals and ambient-condition data) to predict the quality of the current production run and to surface correlations between the various parameters and quality outcomes.
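As a hedged sketch of the influencer-ranking piece (the column names and values below are invented for illustration; the real apps used far richer signals), a tree ensemble’s feature importances can separate signal factors from noise factors:

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical batch-level table: process, ambient and lab signals vs. quality.
    df = pd.DataFrame({
        "mixer_temp":    [78, 81, 79, 90, 88, 77, 92, 80],
        "ambient_humid": [40, 42, 39, 70, 68, 41, 72, 38],
        "residual_mass": [0.1, 0.2, 0.1, 0.9, 0.8, 0.1, 1.0, 0.2],
        "quality_score": [98, 97, 98, 71, 74, 99, 69, 97],
    })
    X, y = df.drop(columns="quality_score"), df["quality_score"]

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    ranking = sorted(zip(X.columns, model.feature_importances_),
                     key=lambda kv: kv[1], reverse=True)
    for name, score in ranking:
        print(f"{name:14s} {score:.2f}")  # rank of each influencer variable

Factors with negligible importance are candidates for the “noise” bucket in the first blind-spot question above.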

The best aspect of the process was that we closed the decision loop with the frontline folks by translating complex statistical signals into a simple quality “smiley” that indicates whether all is going well. When the smiley changed, production was shut down and forensics were initiated to nail the specific parameter that caused the quality deviation.

The learnings? If you are in process-chemical manufacturing and want to stay competitive, consider embedding AI into your frontline-manufacturing actions to boost quality outcomes. And before getting started, ask yourself:

  •  Which product lines experience the highest quality rejection rates? Can we isolate the top three product lines?
  •  What is the economic impact of wasted quality? A best-case estimate? A realistic estimate?
  •  If product quality is enhanced by 3-5 percent, how much economic value would it unlock in the first, second and third years?
  •  What data pools exist? What about sensor data, lab data, SCADA/PLC data, maintenance ticket data, operator data?
  •  Which OT/IT systems hold this event data?
  •  Who can be the executive champion to shepherd the project?
  •  What if initial results from the AI processes could be consumed in 90 days?

I believe the process-manufacturing industry has to view industrial AI as a massive shift, not a temporary phenomenon. Rather than being paralyzed by the threats, companies that embrace industrial AI will boost efficiency.

The risk of digital inaction is greater than the risk of no returns.

Derick Jose is co-founder and chief data scientist at Flutura Decision Sciences and Analytics.


Winning the industrial AI game: Why labeled failure data, not algorithms, is key

Artificial intelligence is slowly but steadily embedding itself into the core processes of multiple industries and changing the industrial landscape in so many ways — be it deep learning-powered autonomous cars or bot-powered medical diagnostic processes. The industrial and energy sectors are not immune to the disruption that comes with embracing AI. As upstream and downstream companies gear up for AI, there is one important lesson I want to share that might seem counterintuitive. For the successful execution of an AI project, the data matters more than the algorithm. Seems odd, right?

Let me start by sharing a recent experience. Flutura was working with a leading heavy equipment manufacturer based in Houston that has numerous industrial assets deployed on rigs globally. These rotary assets were quite densely instrumented; they had a rich digital fabric of pressure sensors, flow meters, temperature sensors and rpm sensors, all continuously streaming data to a centralized data lake. The problem the manufacturer was trying to solve was how to “see” typically unseen early warning signals of failure modes in order to reduce multimillion-dollar downtimes.

In order to do this, every time a piece of upstream equipment went down, we needed to label the reason why it went down. It might have been motor overheating, bearing failures or low lube oil pressure, but until we know the specific reason why equipment goes down, it’s difficult to extract the sequence of anomalies leading to the failure modes. While this company had a massive sensor data lake, running into terabytes, the information was useless until the failure labels were embedded within the assets’ timeline. In order to tag all “failure mode” label blind spots, we configured an app that helped institutionalize this process. Every time a maintenance ticket was generated for unplanned equipment downtime, the app would step through a workflow at the end of which the failure mode for the asset was tagged onto the timeline.
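A minimal sketch of that tagging step (asset IDs, timestamps and the three-hour lead window are invented for illustration; the real app drove this through a maintenance-ticket workflow): join the tickets against the sensor timeline and label the readings that precede each downtime:

    import pandas as pd

    # Hypothetical maintenance tickets with tagged failure modes.
    tickets = pd.DataFrame({
        "asset_id": ["pump_7", "pump_7"],
        "down_at": pd.to_datetime(["2018-01-10 04:00", "2018-02-02 13:30"]),
        "failure_mode": ["motor_overheating", "low_lube_oil_pressure"],
    })

    # Hypothetical sensor stream for the same asset.
    sensors = pd.DataFrame({
        "asset_id": ["pump_7"] * 4,
        "ts": pd.to_datetime(["2018-01-09 20:00", "2018-01-10 03:50",
                              "2018-02-02 09:00", "2018-02-02 13:25"]),
        "motor_temp": [85, 131, 84, 86],
    })

    # Label every sensor row that falls in the 3 hours preceding a downtime.
    lead = pd.Timedelta(hours=3)
    sensors["label"] = "normal"
    for t in tickets.itertuples():
        mask = ((sensors["asset_id"] == t.asset_id)
                & (sensors["ts"] >= t.down_at - lead)
                & (sensors["ts"] < t.down_at))
        sensors.loc[mask, "label"] = t.failure_mode
    print(sensors)

Those labeled windows are what the learning algorithm actually trains on; without them the terabytes of raw sensor data stay unusable.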

So, here are three questions to ask your team before you embark on an AI project:

  1. Top three failures: Which are the top three high-value failure modes that are most economically significant?
    Rationale: All failure modes are not the same. Isolating and prioritizing the vital few failure modes from the significant many saves money.
  2. Tagging process: When equipment goes down, is the failure mode automatically generated by the asset or does it need a “human in the loop” to tag failures?
    Rationale: Some machines are programmed to record the failure mode event as a historian tag, others need an external process.
  3. Breadth and depth: What is the breadth and depth of equipment data available in the data lake?
    Rationale: In order to model the entire set of data, one needs to have maintenance tickets, sensor streams and ambient context. In order to “see” sufficient instances of a failure, the sensor data lake needs to have at least one to two years of operational data.

To conclude, it’s easy to get carried away by the hype surrounding AI and algorithms. But the key to winning the game is finding the answer to the above three data-tagging questions. Good luck as you introduce AI to unlock gold in your data.



Practical AI Lessons Learned

An operator digitized sensor data and integrated physics-based models with statistical data-driven models to predict risk of failure. (Source: Flutura Decision Sciences and Analytics)

A confluence of groundbreaking technologies bundled with next-generation business models is poised to transform the oil and gas industry. It’s history in the making. This convergence of digital technologies (the Industrial Internet of Things, artificial intelligence [AI], autonomous self-healing assets, drones, etc.) is creating entirely new ways of operating a producing well and massively transforming outcomes such as increased production and decreased nonproductive time (NPT). The real-world examples below show these transformations solving real-world problems, and the takeaways are five lessons learned in the execution process.

Predicting fracture pump failures

Flutura worked with one of the world’s largest original equipment manufacturers (OEMs) of fracture pumps. Fracture pumps are used in harsh conditions, and drilling service providers and owner/operators expect the OEM to have an intimate understanding of the current health of each fracture pump and the potential ways it could succumb to a fault mode. To make the transition from the electromechanical world to the digital world, the customer created a digital twin of the fracture pump on Cerebra, including its various subsystems (pumps, engine, transmission, etc.), sensor signals (engine rpm, transmission oil pressure), trips, alarms and fault modes. Once the digital twin was in place, a “digital umbilical cord” was established using Cerebra’s algorithmic state-assessment module, providing remote digital diagnostics for the pump and predicting potential failure modes, with associated confidence levels, so the field force could automatically create tickets. This, in addition to reducing downtime of nodal assets in the field, created a new predictable recurrent revenue pool for the customer through its “digital health monitoring as a service” offering.

A digital twin reduced downtime of nodal assets in the field and created a predictable recurrent revenue pool for the customer. (Source: Flutura Decision Sciences and Analytics)

AI in FLNG carriers

A major global LNG carrier approached Flutura with an operational problem to solve. Floating LNG carriers are used to ship LNG from point to point. This is a complex and delicate process, since the gas is stored at -162 C (-260 F) for ease of transport, at which point it takes up about 1/600th of the volume it occupies in its gaseous state. There is a great deal of cryogenic and leakage risk associated with this process. The carrier wanted an “edge solution,” completely self-contained on the ship, to diagnose and predict risky outcomes. It created a digital twin of the LNG carrier using Cerebra modules, and the solution’s advanced deep-learning neural networks detected, in an unsupervised fashion, temperature and leakage anomalies that human eyes could not.

An LNG carrier created a digital twin to detect temperature and leakage anomalies. (Source: Flutura Decision Sciences and Analytics)

AI in subsea separators

Subsea separators increase the oil recovery rate by separating a well stream into gaseous and liquid components. As oil production and recovery rates are directly correlated to separator performance and health, monitoring separator health in real time and proactively predicting potential failure modes becomes a priority. This industrial OEM was trying to solve three problems using digital platforms:

  • Remotely diagnosing the digital health of subsea separators;
  • Reducing NPT by having a prognosis for the failure modes; and
  • Reducing high operational costs associated with expensive unwanted trips to the rig.

The operator digitized sensor data from inlet pressure, choke pressure, flow rates (gas, oil and water) and differential pressures/fluid levels. It also integrated its physics-based models with statistical data-driven models to predict risk of failure.
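One common way to combine the two model families (a minimal sketch with made-up numbers and a toy physics function, not the operator’s actual models) is residual modeling: let the physics-based model state what a healthy separator should read, then train a statistical model on the gap between measured and expected values:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical physics-based expectation for separator differential pressure.
    def physics_dp(flow_rate: np.ndarray) -> np.ndarray:
        return 0.8 * flow_rate ** 2        # stand-in for the engineering model

    rng = np.random.default_rng(0)
    flow = rng.uniform(1.0, 3.0, 200)
    measured_dp = physics_dp(flow) + rng.normal(0, 0.2, 200)
    failed = rng.random(200) < 0.1
    measured_dp[failed] += 1.5             # failures drift away from physics

    # Data-driven layer: model failure risk from the physics residual.
    residual = (measured_dp - physics_dp(flow)).reshape(-1, 1)
    clf = LogisticRegression().fit(residual, failed)
    print("risk at residual 1.2:", clf.predict_proba([[1.2]])[0, 1])

The appeal of the residual approach is that the statistical layer only has to learn how failures deviate from known physics, rather than relearning the physics itself.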

What has been learned?

Mindsets eclipse toolsets. The race to digital transformation in operating wells is not just about digital toolsets. It’s about changing mindsets in new ways. Veterans of the oil and gas industry have become accustomed to tangible and reliable outcomes. Digital is intangible and iterative as the AI algorithms learn and adapt. This requires executives to think about operations differently and reimagine the way they view upstream operations.

Converting outputs to outcomes. Digital involves executing a great deal of physics and math-based models on sensor streams. These digital outputs then need to be translated into a meaningful operational outcome, like increasing first-time resolution of upstream assets and reducing NPT, which are then mapped to dollars realized. Data need to be converted to dollars.

Sensor lakes. One of the foundational pieces for digital transformation is having a critical mass of labeled fault-mode data. This creates a trail of “digital bread crumbs,” leaving a marker on the asset timeline that indicates how the machine specifically failed. This information is guzzled by the deep neural network to discover the weights that minimize prediction error. Examples of labeled fault data include electric motor failure, hydraulic leakage and stick-slip events.

Intelligent industrial diagnostic “bot” assistance. As the experienced workforce retires, it is important to codify that knowledge for the future workforce. Industrial bot assistants can codify frontline experience and head office intelligence into a comprehensive diagnostic template and make it accessible via “don’t make me think” voice interaction instead of complex dashboard interaction. For example, Flutura created an “Ask Cerebra” diagnostic bot for catwalks that helped a large OEM frontline team step through a diagnostic workflow to understand fault modes codified from years of experience. With the advent of natural language processing algorithms powered by deep learning, field technicians can interact with the asset diagnostic applications through voice interactions, just as bots help in customer service.

Integrate heart and mind. Digital transformation is a complex process requiring tact in dealing with sensitive human issues in a complex ecosystem. This has required seasoned leadership that can understand the transformative potential of digital technologies but can also provide a human-centric approach to solving problems.

These are deep digital shifts that have reached an inflection point, creating a massive transformation of oil and gas operations beyond “vanilla” condition-monitoring systems. The challenges are more human than technological. They require oil and gas leadership to rethink operating models, business models and economic models. And this requires leaders to create a blueprint for responding to these tectonic, irreversible shifts, recognizing that the status quo is not an option as the digital wave seeps into the electromechanical world.


Scaling IIoT successes

During a panel at the SIIA Propelling IoT: Emerging IoT Business Opportunities event in Houston, TX, I shared examples of how the IIoT affects business outcomes in the industrial sector. Anyone who keeps up with the IIoT knows it is often a situation of crawl, walk, run before you start seeing real ROI. But once you are running, it is simple to duplicate that success, whether you are reducing operational cost, increasing yield and quality of products, reducing downtime or improving safety.

The level everyone is trying to reach is prognostics: what are the next best actions to take? The people in the field don’t have time to think about this; they just need to know what is failing, when it is going to fail and what maintenance they need to do next.

Prognostics is key in answering these questions—connecting live machine data with tagged events, labeled data and maintenance data.

With the right data sets, a company can move toward algorithmic spare-part refurbishment while generating mechanical repair or work orders with specific repair instructions, all while publishing a maintenance-repair video on a working 3D digital twin. There is a lot of opportunity for oil and gas companies to deploy these types of Industrial IoT solutions in their operations; some of the high-value equipment includes hydraulic fracturing units, catwalks, top drives and mud pumps. For petrochemical plants and refineries, the equipment includes cracking units, reactors, valves and pumps.

Using artificial intelligence, companies can start turning the ship around on their operations, moving from a reactive mode to a predictive/preventive mode, which enables them to unlock revenue streams buried in the data.

I recommend having a workshop with the right decision-makers and stakeholders within an organization, equipped with a budget and IIoT-value KPIs. Find a business problem to solve first, versus trying to sell a solution that may not fit the business case. Do your homework on the target market and target customer, and know the business problems they are trying to solve. Then bring in subject-matter experts who know the process, the equipment and the operations very well.


- Rick Harlow, executive vice president of Americas at Flutura Decision Sciences and Analytics


How IIoT platforms with AR/VR help OEMs reduce operating costs

Kurt from our Houston office recently visited an upstream operation in Eagle Pass, Texas. At this operation, a variety of mission-critical equipment was running and collecting crucial production data points. It took Kurt a good six hours to get to the facility; the trip was time-consuming, tedious and costly. We asked ourselves a simple question: “How can we reduce Kurt’s visits to Eagle Pass by combining the 3D immersive experience of a virtual reality (VR) tool with the deep advanced analytical capabilities of an IIoT platform?” That question led to the development of augmented reality (AR)/VR apps that gracefully complement an IIoT system.

Take, for example, a pump or a motor that commonly powers upstream operations. Our IIoT platform’s anomaly detection algorithms flag and mark cases of motor overheating. These anomaly markers are laid out on a 3D model of the asset, and reliability engineers, one sitting in Houston and another sitting in Oslo, can experience the unhealthy motor from the comfort of their headquarters. Sensor readings from the motor stream from historian tags in real time to the IIoT platform. The IIoT platform is then integrated with the AR/VR app, which enables the engineers to perform multiple asset-examination operations. They can get “exploded” and “zoomed in” views of the asset and can rotate the asset across the 3D axes to pinpoint what is going wrong and where.
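As a rough sketch of how such overheating markers might be computed (the window size, threshold and data are illustrative assumptions, not our platform’s actual algorithm), a rolling-baseline z-score flags the samples whose indices then get laid onto the 3D model:

    import numpy as np

    def overheat_markers(temps: np.ndarray, window: int = 30, z: float = 3.0):
        """Return indices where motor temperature deviates from its recent baseline."""
        marks = []
        for i in range(window, len(temps)):
            base = temps[i - window:i]
            sigma = base.std() or 1e-9            # guard against a flat baseline
            if (temps[i] - base.mean()) / sigma > z:
                marks.append(i)
        return marks

    stream = np.concatenate([np.random.default_rng(1).normal(70, 1, 200), [95, 97, 99]])
    print(overheat_markers(stream))  # indices to lay onto the 3D asset model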

In addition to experiencing the asset, the reliability engineers at headquarters can use voice and hand gestures to understand the sequence of events leading up to a high-value failure mode.

These features are extremely useful for optimizing upstream operations, reducing trips and shaving off costs in a hyper-competitive marketplace. As Harvey Firestone said, “Capital isn’t so important in business. Experience isn’t so important. You can get both these things. What is important is ideas. If you have ideas, you have the main asset you need, and there isn’t any limit to what you can do with your business and your life.” These new ideas promise to change the way OEM and operators run their upstream operations.



What can healthcare teach industrial folks?


The industrial world (oil and gas, utilities, refineries, discrete/continuous manufacturing) is in the early stages of a radical transformation powered by artificial intelligence (AI). AI promises to bring unprecedented efficiencies and competitive advantages by radically transforming business models. As the race for AI-powered transformation accelerates in the coming years, it would be wise to learn from the experiences of the healthcare industry, which has traveled a similar trajectory.

Example 1: Anomaly detection in sensor streams

Patients suspected of having arrhythmia will often get an electrocardiogram (ECG) in a doctor's office. However, if an in-office ECG does not detect a problem, the doctor prescribes the patient a wearable ECG that monitors the heart continuously for two weeks. The resulting heartbeat data is then forensically examined (second by second) for any indications of problematic arrhythmias, some of which are extremely difficult to differentiate from harmless heartbeat irregularities. AI algorithms powered by deep-learning techniques can detect 13 types of arrhythmia from ECG signals, helping doctors detect and treat heart problems and extend human life.

How does this apply to the industrial sector? We have been working to detect unusual “electromechanical” rhythms in sensor data from a variety of upstream assets (like frac pumps), thereby diagnosing the presence of specific fault modes, such as impending lube-oil or gearbox issues, and extending asset life.
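As a hedged illustration of this kind of rhythm screening (the features, values and contamination rate are assumptions for the sketch, not our production models), an unsupervised detector can be fit on healthy operating data and then score fresh readings:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical per-minute features from a frac pump: vibration RMS, lube-oil pressure.
    healthy = np.column_stack([rng.normal(1.0, 0.1, 500), rng.normal(60, 2, 500)])
    suspect = np.array([[2.4, 41.0]])       # unusual rhythm preceding a fault

    model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
    print(model.predict(suspect))           # -1 flags an anomalous rhythm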

Example 2: Diagnostic image detection

Diabetic retinopathy (DR) is the fastest-growing cause of blindness, and more than 415 million diabetic patients are at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. Unfortunately, medical specialists capable of detecting the disease are not available in many parts of the world where diabetes is prevalent. Deep-learning algorithms examine pictures of the back of the eye and rate them for disease presence and severity. Severity is determined by the type of lesions present (e.g., microaneurysms, haemorrhages, hard exudates), which are indicative of bleeding and fluid leakage in the eye.

How can this be applied to the industrial sector? We are seeing an increased adoption of drones for pipeline inspection in midstream and downstream parts of the oil and gas business. These drones generate terabytes of images scanned by AI algorithms to detect the presence of leaks and fractures. AI can definitely see things the human eye misses in images.


Example 3: “Diagnostic bot in your pocket”

How does having a “doctor in your pocket” feel? That’s precisely what healthcare start-ups are doing by cutting down on unnecessary consultations and developing AI that can engage patients just like a physician. The use of bots by healthcare AI companies (HealthVault, Babylon Health and Medwhat) to diagnose health conditions is increasingly popular.

How is this applied to the industrial sector? We are developing AI bots that can run diagnostic tests on sensor data and highlight the presence of poor quality lube oil, asset misuse on rigs, vibration anomalies and more. These insights can reduce logistic costs associated with troubleshooting remote assets in an industry where the price of a barrel dictates which companies survive and which go out of business.

The healthcare industry is a front-runner in applying AI to mission-critical tasks, whether they be image-detection, anomaly-detection, sequence-detection or bots to automate diagnosis. Industrial AI practitioners can learn a great deal from the healthcare successes and failures, and apply those learnings in the industrial sector.

Jack Welch aptly said, “If the rate of change on the outside exceeds the rate of change on the inside, the end is near.” It’s important for industrial companies to ask the question “Is the rate of change outside greater than that inside?”

Derick Jose is co-founder and chief data scientist at Flutura Decision Sciences and Analytics.


Deploying an AI-driven IoT platform: 21 questions to ask

My experience with Fortune 100 global energy, engineering and OEM companies tells me that a tectonic shift is happening in the energy industry, a shift that promises to change the game in the marketplace forever, leaving traditional asset- and capex-based business models behind. Increasingly, we are seeing that AI-driven IoT platforms are becoming the digital nervous systems of 21st-century industrial companies. IoT platforms are going to be the foundation on which new business models are created, powering new revenue pools and expanding the engineering organization’s foray into other value-added services that bring predictable revenue streams. As a result, the choice of an AI-driven IoT platform is an extremely strategic one that cannot be reversed easily.

As the engineering world collides with the digital world, there is a great deal of confusion, and our team felt that, more than finding answers, the right questions needed to be asked. Having been soaked in the AI and industrial IoT world, we would like to share a list of 21 mutually exclusive and collectively exhaustive questions spanning the core dimensions of applying AI in an industrial context.

Instrumenting asset blind spots 

To assess the scope of the work, one of the initial tasks is to figure out the “machine learnability” quotient of the asset. Most electromechanical assets have rudimentary instrumentation and may not have the sensors required to capture the information needed to model the asset. To get context on the remote asset, here are a few questions that reveal the instrumentation and asset landscape:

1. What events are being emitted by the asset today?
2. What events are not being broadcast by the asset that need to be instrumented or “sensor enabled” going forward for the AI algorithm to learn from?

Sensor health monitoring

One of the most common issues faced in the rugged industrial context is the malfunctioning of sensors, which can result in corrupt data being fed to the AI algorithms. As there are hundreds of thousands of assets and sensors, it is very important to know what percentage of them are transmitting healthy sensor data. Basically, we need to look for the absence of events from assets of interest; for example, some sensors may have battery issues and stop transmitting. A minimal sketch of such a health view follows these questions:

3. Does the AI-driven IoT platform have dashboards that reveal the number of sensors not broadcasting state information?
4. Do the sensor health monitoring dashboards reveal the length of time that an asset has not been communicating?
5. Does the sensor health monitoring dashboard flag events with spurious data or incorrect data?
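As referenced above, here is a minimal sketch (the sensor IDs and timestamps are hypothetical) of the computation behind such a dashboard: find sensors whose last heartbeat is older than a threshold and report how long each has been silent:

    import pandas as pd

    # Hypothetical last-seen heartbeat per sensor.
    last_seen = pd.DataFrame({
        "sensor_id": ["s1", "s2", "s3", "s4"],
        "last_event": pd.to_datetime(["2018-03-01 09:59", "2018-03-01 06:00",
                                      "2018-02-27 00:00", "2018-03-01 09:58"]),
    })
    now = pd.Timestamp("2018-03-01 10:00")
    silence = now - last_seen["last_event"]

    silent = last_seen[silence > pd.Timedelta(hours=1)]
    print(f"{len(silent)}/{len(last_seen)} sensors silent >1h")
    print(silent.assign(silent_for=silence[silent.index]))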

AI-driven signal detection

AI is where deep mathematics meets machines: AI and deep-learning algorithms crawl through sensor data in search of patterns to predict asset downtime, asset failure and asset optimization opportunities:

6. Which AI algorithms need a data scientist to configure, and which algorithms can be executed by an asset engineer?
7. Can the AI platform signal anomalies in real time?
8. Can the AI platform express the taxonomy of anomalies experienced by an asset?
9. Can the AI platform correlate the anomalies to asset outcomes (downtime, remaining useful life) that need to be modeled?
10. Can the AI platform have multiple models blended together as an ensemble?
11. Can the AI platform predict in real time or is the prediction in offline mode?

Industrial data product creation   

Industrial data products are a set of AI solvers for real-world business problems. The apps can answer a correlation question or trigger an action signal. As engineers start layering intelligence over their assets using data products, here are a few questions that can help:

12. Can the IoT platform guide users to create edge data products using APIs or workflows?
13. Can the IoT platform create forensic data products that go beyond “dot on the map” to identify interesting correlations never seen before?
14. Can the IoT platform triangulate signals across heterogeneous data pools: sensor historian data streams, maintenance events, ambient asset conditions and other data streams?

Scalability of sensor event streams 

The industrial IoT world will generate far more events than the consumer world. Take, for example, the Bombardier C-Series jetliner with its Pratt & Whitney engine, which has 5,000 sensors embedded within it. Reported figures vary, but those sensors can emit on the order of 10 GB of data per second, which adds up to hundreds of terabytes over a 12-hour flight. The scale required for data ingestion is vastly higher than in consumer applications.
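A quick back-of-envelope check on that magnitude (using the emission rate assumed above):

    # Rough per-flight data volume at an assumed peak emission rate.
    rate_gb_per_s = 10                       # assumed sensor emission rate
    flight_seconds = 12 * 3600               # 12-hour flight
    total_tb = rate_gb_per_s * flight_seconds / 1000
    print(f"~{total_tb:.0f} TB per flight")  # ~432 TB at these assumptions

With that scale in mind, here are a few questions to ask about scalability: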

15. What is the peak emission rate of my asset events? Is it thousands per hour, millions per hour?
16. What is the peak ingestion rate of the IoT platform?
17. How much time will it take for an alarm event to reach the central command center? Is it milliseconds or seconds or minutes?

Pricing model of AI-driven IoT applications

The industry is in the early stages of its evolution, and multiple pricing models exist. Over time, depending upon the complexity of the industrial process and its linkage to a financial outcome, pricing models will stabilize. In the meantime, here are a few questions to ask:

18. Should pricing be set per asset or per asset type?
19. Should pricing be set per app or per cluster of apps?
20. Should pricing be based on event volume, as offered by players like Splunk?
21. Should pricing be outcome-based, like pay-per-thrust for aviation engines?

Closing thoughts 

With all the considerations above, the choice of an industrial AI-driven IoT platform is a multidisciplinary affair requiring three lenses to look through: the financial lens, the engineering lens and the software lens. Taking the time to consider all of these variables before you begin down the AI path is critical and can make the task a lot less risky.

Albert Einstein once said, “We cannot solve our problems with the same thinking we used when we created them.” We hope the above questions serve as an actionable AI playbook as you plan out your strategy for an industrial IoT initiative.


7 Best Practices for Applying Industrial Artificial Intelligence Bots

As experienced industrial employees leave the workforce, AI bots are filling in the gaps they leave behind. Here are some best practices.

Bots and virtual diagnostic agents are increasingly entering our daily habits, helping with intermediate tasks that were typically human driven. For example, when we shop online, a conversation bot is activated to understand our purchase intent and, depending on our inputs, a recommendation bot suggests possible items to purchase. Another example comes from the healthcare industry: Flow Health, a start-up, has AI bots that diagnose potential health conditions before a person actually sees the doctor. The consumer world is slowly being taken over by transaction-automating bots.

At Flutura, we have been focused on applying AI bots to the industrial context; for example, we have bots that facilitate reliability engineering and maintenance diagnostic tasks. Based on our experiences, here are seven key best practices for applying AI bots in a practical way.

Best Practice-1: Map the high-impact tasks

How do you begin your journey to introduce AI bots? It all starts with identifying one high value task.

Which tasks are to be “botified”?

For example, in upstream oil and gas, a field service engineer regularly diagnoses irregularities in pumps, motors, winch drives and the like as they arise. Which high-value failure mode is worth diagnosing with a bot instead?

What is the frequency of execution of tasks?

For example, a bot may be redundant for an asset that rarely fails, whereas a mud pump, which is used in harsh conditions and may be down more frequently, is a strong candidate.

Best Practice-2: Target a measurable operational outcome

Flutura was working with a leading industrial chemical manufacturer that experienced $16 million worth of reactor downtime caused by poorly behaving valves. An AI-powered valve diagnostic bot now helps the company spot valves with a poor health score and recommends the next best action. This bot is expected to bring down the economic impact of downtime by 40%.

Best Practice-3: Decoding user intent from free text using classifier models

One of the primary tasks of the AI bot is to infer the user’s intent. For example, based on the query text, does the engineer want to prioritize the alarms to respond to anomalies, or should a root cause analysis of the events leading to an equipment failure mode be conducted instead? The microservices decoding user intent should be robustly tested so that the industrial engineer’s interaction experience is optimal.
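A minimal sketch of such an intent classifier (the example queries, intent labels and model choice are illustrative assumptions, not Cerebra’s implementation):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hypothetical training set mapping engineer queries to intents.
    queries = [
        "show me the top alarms for pump 12",
        "which alarms should I respond to first",
        "why did the compressor trip last night",
        "root cause for yesterday's gearbox failure",
    ]
    intents = ["prioritize_alarms", "prioritize_alarms", "root_cause", "root_cause"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(queries, intents)
    print(clf.predict(["what caused the motor failure on rig 4"]))  # likely root_cause

In practice the training set would cover many intents and thousands of utterances, but the shape of the microservice stays the same: query text in, intent label out.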

Best Practice-4: AI bot integration with adjacent operational systems

Flutura is building a diagnostic AI bot for rod pumps. These bots do not exist in isolation. The diagnostic bot needs to “listen” to alarms generated from electronic condition-monitoring systems and other data loggers, and it must be able to automatically raise a ticket alerting operators to the potential anomaly.

Best Practice-5: Bot integration with AR/VR applications for collaborative troubleshooting

One of the best use cases for reducing operational cost in upstream oil and gas is collaborative remote troubleshooting to cut down on rig visits. For example, a maintenance engineer sitting in Houston can collaborate with a reliability specialist sitting in Norway by wearing a virtual reality headset. Together they can dig deep into the real-time MWD (measurement-while-drilling) logs of a drill bit in Saudi Arabia. This ability to collaborate globally reduces the operational cost associated with expensive rig visits and increases first-time resolution of trouble tickets. In this context, Cerebra’s diagnostic AI bots are integrated with remote VR/AR apps from Metaverse and provide an immersive three-dimensional asset experience to the maintenance/reliability community.

Best Practice-6: Intermediating interbot conversation

The AI bot architecture must accommodate interactions between bots. For example, a diagnostic bot specialized in isolating issues with a frac pump should be able to interact with a cementing truck diagnostic bot as both assets are related in the real world upstream process.

Best Practice-7: Context sensitivity

As the AI diagnostic bot interacts with a reliability engineer about upstream assets, it needs to maintain the context state in which the interactions occur. The context state could be driven by the well where the operation is taking place, the actual asset ID being diagnosed, the relationship this asset has to its ambient context and the operator running the asset. This context determines whether a diagnosis points to engineering deficiency or to operator handling.
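One simple way to carry that state between conversation turns (a hypothetical sketch; the field names are invented):

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class BotContext:
        """Hypothetical conversation state the diagnostic bot carries between turns."""
        well_id: Optional[str] = None
        asset_id: Optional[str] = None
        operator_id: Optional[str] = None
        ambient: dict = field(default_factory=dict)  # e.g. {"temp_c": 41}

    ctx = BotContext(well_id="W-114", asset_id="rod_pump_9", operator_id="op_27")
    # Each new utterance is interpreted against ctx, so "show its vibration trend"
    # resolves "its" to rod_pump_9 without re-asking the engineer.
    print(ctx)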


Closing thoughts

As more and more of the experienced workforce leaves the industry, it’s necessary to digitally codify troubleshooting best practices drawn from years of experience in solving high-value asset failures. It is also important to bring down the operational costs associated with remote troubleshooting. Emily Greene Balch famously said, “The future will be determined in part by happenings that it is impossible to foresee; it will also be influenced by trends that are now existent and observable.” We at Flutura believe that AI bots combined with AR/VR are the future of industrial operations, and we are ready to execute with that in mind.

Derick Jose, Co-Founder and Chief Data Scientist, Flutura Decision Sciences and Analytics
Derick is the co-founder and chief data scientist at Flutura and has been in the analytics space for close to three decades. He oversaw the evolution of Flutura's data science practice and is one of its chief architects. His career has taken him into many organizations, helping them define the vision for their data monetization programs. He is currently developing game-changing data products in the industrial IoT space to support disruptive business models for the energy and engineering industries.
Prior to founding Flutura, Derick was vice president of knowledge services at Mindtree and was part of an elite team that architected the world's largest citizen biometric and demographic data infrastructure.


Marco Polo and seven monetizable IoT intelligence use cases

In the 13th century, Marco Polo set out with his father and uncle on a great voyage across uncharted territories. They traveled across the vast continent of Asia and became the first Europeans to visit the Chinese capital. For 17 years, Marco Polo explored many parts of the world before finally returning to Venice. He later wrote about and mapped out his experiences, inspiring a host of new adventurers and explorers to travel to the exotic lands of the East.

We are all on a voyage similar to Marco Polo’s, navigating the uncharted ocean of IoT big data — seeking those elusive use cases. As we navigate this complex ocean of industrial IoT data, we need two things:

  1. Maps (industry-specific use cases)
  2. Meta patterns (common across industries)

These would help other “Data Marco Polos” avoid the potential minefields we have encountered.

We have abstracted and distilled common big data use cases in industrial IoT that pass the business case test. These are based on real-world projects executed across energy and heavy engineering industries in the U.S. and Japanese markets. Here are the seven core IoT big data use cases that we mapped out:


1. Creating new IoT business models
We worked with a customer that used our IIoT big data technology to restructure the pricing model of field assets based on ultra-specific usage behavior. Before adopting the IIoT analytics product, the customer had a uniform price point for each asset. Deploying the IoT analytics technology helped them transition from a uniform pricing model to executing usage-based dynamic pricing that resulted in improved profitability.

2. Minimize defects in connected plants
The client was a process manufacturing plant located in the Midwest, manufacturing electrical safety products. The quality of its electrical safety product could mean life or death for folks working in the power grid. This customer had sufficiently digitized the manufacturing process to get a continuous real-time stream of humidity, fluid viscosity and ambient temperature conditions. We used this new, rich sensor data pool to identify drivers of defect density and minimize them.

3. Data-driven field recalibration
Many assets come with default factory settings that are never recalibrated, resulting in suboptimal performance. We worked with an industrial giant charged with shipping a crucial engineering asset that stabilizes the power grid. These assets were constantly inserted into the network ecosystem with default parameter settings. One powerful question we asked was, “Which specific parameter settings discriminate the failed assets from the assets performing well?” Discriminant analysis revealed the parameter settings that needed to be recalibrated, along with the optimal band setting. By putting this simple intervention in place, we were able to dramatically reduce the number of failure events in the system.
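A minimal sketch of that failed-versus-healthy comparison (the columns and values are made up; the actual engagement used formal discriminant analysis):

    import pandas as pd

    # Hypothetical field data: factory parameter settings plus observed outcome.
    df = pd.DataFrame({
        "gain_setting":   [1.0, 1.0, 1.4, 1.4, 1.0, 1.5, 1.4, 1.0],
        "trip_threshold": [5.0, 5.2, 5.0, 5.1, 5.1, 5.0, 5.2, 5.0],
        "failed":         [0,   0,   1,   1,   0,   1,   1,   0],
    })

    # Which settings discriminate failed assets from healthy ones?
    summary = df.groupby("failed").mean().T
    summary["gap"] = (summary[1] - summary[0]).abs()
    print(summary.sort_values("gap", ascending=False))
    # A large gap (here on gain_setting) points at the parameter to recalibrate.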

4. Real-time visual intelligence
This is probably the most widely adopted use case, where the platform answers the simple question of “How are my assets doing right now?” This could be transformers in a power grid, oil field assets in a digital oil field context or boilers deployed in the connected plants context. The ability to have real-time “eyes” on industrial field assets streaming in timely state information is crucial. The reduced latency combined with the visual processing of out-of-condition events using geospatial and time-series constructs can be liberating for hardcore engineering industries not used to experiencing the power of real-time field intelligence.

5. Optimizing energy and fuel consumption
For many moving assets like aircraft, fleet trucks and ships, fuel cost is a significant line item in operations. Fuel-consumption sensor data mashed with location data collected from mobile assets can help optimize fuel efficiency. We worked with a major fleet owner to reduce fuel consumption by 2%, which shaved millions of dollars off the company’s operational expenses. The customer was able to reallocate the funds to a major project it had been putting off due to budget constraints.

6. Asset forensics
As assets become increasingly digitized, businesses can get a granular, 360-degree view of asset health spanning sensor data pools, ambient conditions, maintenance events and connected assets. One can confirm an asset-failure hypothesis and detect correlations from these new, rich data pools: much richer intelligence than existing processes provide today for diagnosing asset health.

7. Predicting failure
Once there is a critical mass of signals, multivariate models can be built to score an asset on failure probability. Once this predicted failure probability crosses a certain threshold, it can automatically trigger a proactive ticket in the maintenance system (such as Maximo) for an intervention: replacing a part, recalibrating a machine or flagging a machine for closer inspection. Many companies are looking toward predictive maintenance models, versus time-based maintenance programs, to be more efficient in their operations. We have a customer that was able to restructure its entire maintenance program around real-time streaming signals from its machines. This company has been able to provide a more efficient maintenance program for its customers based on the actual performance of the equipment.
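The threshold-to-ticket handoff can be as simple as the sketch below (the threshold, payload fields and the print standing in for a Maximo integration are all assumptions):

    def maybe_raise_ticket(asset_id: str, failure_prob: float, threshold: float = 0.75):
        """Open a proactive maintenance ticket once predicted risk crosses the line."""
        if failure_prob < threshold:
            return None
        ticket = {
            "asset_id": asset_id,
            "priority": "high" if failure_prob > 0.9 else "medium",
            "action": "inspect / replace part",
            "reason": f"predicted failure probability {failure_prob:.0%}",
        }
        # Stand-in for a real push into Maximo or another maintenance system.
        print("ticket created:", ticket)
        return ticket

    maybe_raise_ticket("mud_pump_3", failure_prob=0.82)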

As Marcel Proust said, “The voyage of discovery is not in seeking new landscapes, but in having new eyes.”

Good luck with your IoT big data voyage!



OEMs, deep learning & IoT powering new biz models

Original equipment manufacturers (OEMs) are increasingly turning to predictable, recurrent digital services to reveal new revenue streams. Let’s take a look at three business models and how deep learning can help reduce the time to resolve issues while increasing revenue.

Model 1: Remote Diagnostics as a Service

I have worked with a leading oil and gas OEM in Houston that had a vision to create new digital revenue streams. The first service that resonated with the oil-field services companies operating the assets was remote monitoring; they wanted to reduce nonproductive downtime. With the increasing footprint of sensors, this model is being extended to additional upstream assets like downhole drill bits, fracking pumps, top drives and rod pumps.

Model 2: Performance Benchmarking as a Service

I have worked with an OEM that benchmarked the health of its deployed assets and, depending upon their condition, offered additional value-added services, such as finding a buyer for assets past their performance prime. Performance Benchmarking as a Service is still at an infant stage, and we expect this trend to accelerate rapidly in the coming years.

OEM Digital Business Models

Model 3: Extreme Pricing Personalization

In the automotive industry, Progressive Insurance created an offering built around bartering machine data (mileage, braking, turns, acceleration) from cars. Customers agreed to install a device in the car that taps into the machine data generated, from which driving habits are inferred; in return, they received discounted insurance prices. The data was used to create risk profiles that informed pricing models unique to the individual, as opposed to being part of a generic segment.
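As a toy illustration of this kind of usage-based personalization (the behavioral weights and event names are invented for the sketch and are not Progressive’s model):

    # Hypothetical scoring of telematics events into a personalized premium.
    BASE_PREMIUM = 100.0

    def risk_score(events: dict) -> float:
        """Weight observed driving behaviors; weights are illustrative only."""
        return (0.4 * events.get("hard_brakes_per_100mi", 0)
                + 0.2 * events.get("night_miles_pct", 0) / 10
                + 0.1 * events.get("sharp_turns_per_100mi", 0))

    def personalized_premium(events: dict) -> float:
        return BASE_PREMIUM * (1 + min(risk_score(events), 1.0))

    print(personalized_premium({"hard_brakes_per_100mi": 1, "night_miles_pct": 5}))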
These examples illustrate how the additional information gleaned from deep learning provides new perspectives on myriad situations, offering new levels of business insights. Deep learning is the new frontier for business to truly begin understanding how to mitigate risk and find new pools of revenue.

by Derick Jose, Flutura co-founder and chief data scientist.


Three practical applications of deep learning and IoT in oil and gas

Deep learning and IoT are two game-changing technologies with the potential to change the stakes for oil and gas companies facing profit pressure after the dramatic drop in the price of oil. In this blog, based on Flutura’s extensive experience in the oil and gas industry, we highlight three practical use cases from the trenches where these technologies are applied to solve real-life problems and impact meaningful business outcomes.

1. Deep learning algorithms detect risks in oil pipelines

In our first use case, we look at how algorithms can reveal patterns and information not easily seen in other ways. For instance, drones are increasingly being used for pipeline inspections. As these drones fly along a pipeline, they record an enormous amount of video footage, and it is very difficult for a human being to spot risks such as leaks and cracks in all of it. Deep-learning algorithms can automatically detect the pixel signatures of cracks and leaks in drone footage that humans can miss, thereby minimizing infrastructure risk.
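As a hedged sketch of the shape of such a detector (a toy, untrained CNN scoring a random stand-in frame; a real deployment would be trained on labeled pipeline imagery), frame-level crack/leak scoring might look like:

    import torch
    from torch import nn

    # Toy binary crack/leak scorer for 64x64 grayscale frames (untrained).
    class CrackDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.Linear(16 * 16 * 16, 1))

        def forward(self, x):
            return torch.sigmoid(self.net(x))

    model = CrackDetector().eval()
    frame = torch.rand(1, 1, 64, 64)  # stand-in for a preprocessed drone frame
    with torch.no_grad():
        risk = model(frame).item()
    if risk > 0.9:                    # flag only high-risk frames for human review
        print(f"flag frame for inspection (risk={risk:.2f})")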

2. Deep learning algorithms detect asset behavior anomalies

While working with several oil and gas companies, we were able to collect a great deal of data from sensors strapped onto upstream assets like frack pumps and rod pumps. Looking for anomalies in high-velocity time-series parameters is like looking for a needle in a haystack for mere mortals. Deep learning algorithms can “see” anomalies that traditional rule-based electronic condition monitoring systems miss and can alert rig operations command centers.

3. Rig diagnostic bots

While providing remote diagnostic services to industrial assets, the conventional form of interaction is through traditional dashboard communications. With the advent of natural language processing algorithms powered by deep learning, field technicians can interact with the asset diagnostic applications through voice interactions just as bots help in customer service.

Concluding thoughts

The advent of deep learning and IoT has brought about great strides in learning: predicting and determining attributes, including insights on anomalies, digital signatures, and acoustic changes and patterns. Being able to see beyond what the eye can see provides the potential, as our use cases illustrate, to head off problems and structural failings, saving organizations time and money and keeping all who benefit from their services safer. We envision a future where the twin digital capabilities of deep learning and IoT will differentiate the winners from the laggards in the competitive energy marketplace, and the first steps are being taken right now.



Flarrio

Pavel Romashkin, Volitant AI

The future state of the AI technology in IoT is complete, efficient automation. AI will learn to use industrial equipment better than humans and, as a result, replace human operators.

Derick Jose, Flutura

Industrial companies in the energy and engineering sectors are trying to find practical, on-the-ground applications for AI and IoT that impact a measurable financial outcome, and they are running early-adopter pilots.

Connell McGill, Enertiv

The future of AI in IoT is a world filled with so much data that we can know exactly what is going on everywhere at once and optimize the general state of things. Kind of like a human nervous system, but for the entire planet.

Sastry Malladi, FogHorn

AI is rapidly penetrating Edge Computing, particularly in IIoT. Analytics and Machine Learning is already prevalent in Edge devices and the next logical step is AI to further optimize processes.

Nelson Chu, Parametric

Soon, AI will automate routines for IoT. For example, when you turn on your lights and TV together, it will create scenes for automation. Additionally, AI could use sensors to generate shopping lists.

Rich Rogers, Hitachi

In the last century, electricity fueled the industrial revolution, giving us powerful factories and machines. Today, IIoT and AI software are bringing them to life in new and unexpected ways.

Mahi de Silva, Botworx.ai

AI is already being integrated into IoT and even IIoT, where the machines and products are able to diagnose themselves and interact with their human operators.

Jeremy Pola, Novecom

The future of IoT and IIoT is in manufacturing user experiences that deliver advanced analytics and data visualisation. This will be achieved through collaboration of computer science and data science.

Nenad Cuk, CroatiaTech.com

I see AI systems in the near future controlling, navigating and maintaining IoT devices and products. One category in particular, drones and how they are managed. With thousands of drones in the sky, AI will need to carry the weight and manage these systems on a grand scale.
