Failure Analysis: Moving from Fragmented to Unified Data in the Steel Industry

At AISTech26, we heard from many attendees about persistent data challenges. Failure analysis and problem resolution rarely suffer from a lack of data: SEM/EDS, EBSD, metallography, hardness maps, XRD, chemical analysis, and on-demand mechanical testing can generate enormous quantities of it. All too often, though, we still lack the information needed to confidently identify the root cause.
The real challenge is integration. Fragmented pools of data spread across instruments, file shares, LIMS/ELNs, spreadsheets, and reports make it impossible to see the whole story. What you need is a way to assemble that data into a traceable, contextualized narrative: a system that not only captures the data but also links it together and preserves data provenance and process history.
“We have a data warehouse” is no longer enough.
Scientific workflows and failure analysis processes are often data-rich and evidence-poor.
Too often, we have to manually stitch together material pedigree, instrument outputs, process logs, spreadsheets, and other data points to tell the story behind a failure and determine the best corrective action.
Failure investigations depend on relationships and context: heat/lot-to-serial traceability, preparation and instrument settings, calibration state, evolving process windows, and model/analysis versions.
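To make those relationships concrete, here is a minimal sketch in Python of the kind of linked record an investigation depends on. The types and field names are hypothetical illustrations, not a real schema from any particular system:

```python
from dataclasses import dataclass, field

# Hypothetical record types showing the relationships an investigation
# depends on; names and fields are illustrative, not a real schema.

@dataclass
class Measurement:
    instrument: str        # e.g., "SEM/EDS"
    settings: dict         # preparation and instrument settings
    calibration_id: str    # calibration state at acquisition time
    analysis_version: str  # model/analysis version that produced the value
    value: float

@dataclass
class SerialUnit:
    serial_no: str
    heat_no: str           # heat/lot-to-serial traceability
    measurements: list[Measurement] = field(default_factory=list)

unit = SerialUnit("SN-0073", "H1042")
unit.measurements.append(
    Measurement("SEM/EDS", {"kv": 20}, "CAL-2024-02", "v2", 3.1))
```

When any of these links is missing, the context in the surrounding bullet points has to be rebuilt by hand, if it can be rebuilt at all.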
Beyond the time and effort required to collect and merge data manually, and the gaps that open up between sources, traditional data warehouses fall short when linking data across repositories:
- Mismatched data formats, values, and definitions – teams spend more time cleaning and reconciling data and less time analyzing the actual failure.
- Critical context is scattered – tests are duplicated, and false leads are chased, because key details are hard to discover.
- No reliable source verification – investigations slow down, and results are questioned, because decisions cannot be quickly verified (or defended).
- Moving target – calibrations, models, and even definitions change over time, but the data isn’t consistently versioned, meaning the same metric may not carry the same definition throughout the analysis timeframe (see the sketch below).
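That last point is easy to underestimate. As a minimal illustration (the record fields and values here are hypothetical), two readings can look directly comparable while resting on different calibrations and metric definitions:

```python
# Two hardness readings that look comparable but were produced under
# different calibrations and metric definitions. Without version fields,
# the mismatch is invisible; with them, it can be caught before analysis.
reading_a = {"heat": "H1042", "hardness_hv": 212.0,
             "calibration_id": "CAL-2023-07", "definition_version": "v1"}
reading_b = {"heat": "H1187", "hardness_hv": 214.5,
             "calibration_id": "CAL-2024-02", "definition_version": "v2"}

def comparable(a: dict, b: dict) -> bool:
    """Treat two readings as comparable only if they share a metric definition."""
    return a["definition_version"] == b["definition_version"]

assert not comparable(reading_a, reading_b)  # v1 vs v2: flag, don't average
```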
The High Cost of Data Gaps
Data gaps break the chain of evidence needed to confidently prove “why something happened” and how to prevent it from happening again. In short, data gaps turn a data-rich workflow into an evidence-poor investigation.
Teams can improve data integrity, and their ability to manually mine and weave data from multiple sources, by adopting best practices: a common data dictionary, standardized naming conventions, and a minimum set of metadata keys captured for every analysis. Repeatable investigation checklists then ensure that all cases share the same assumptions, decision points, and underlying data links.
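As a sketch of what enforcing a minimum metadata set can look like at ingest (the key names here are illustrative, not a prescribed standard):

```python
# Illustrative minimum metadata set for any record entering an investigation.
REQUIRED_KEYS = {
    "sample_id", "heat_no", "instrument", "operator",
    "acquired_at", "calibration_id", "method_version",
}

def missing_metadata(record: dict) -> set[str]:
    """Return the required keys a record lacks, so gaps are caught at ingest."""
    return REQUIRED_KEYS - record.keys()

record = {"sample_id": "S-2214", "heat_no": "H1042", "instrument": "SEM/EDS"}
print(missing_metadata(record))  # the four keys this record is missing
```

Checking records against the dictionary as they are captured is far cheaper than reconstructing missing context months later, mid-investigation.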
Further Reducing Evidence Gaps in Failure Analysis
Closing the evidence gaps requires more than additional resources or an updated data warehouse; it requires an evidence system. RJ Lee Group’s AKM-SEAMS™ connects disparate data from instrument sensors, reports, and process systems through ETL pipelines into a domain-driven graph.
The ontology-driven graph created by AKM-SEAMS™ enables data to be found, linked, and verified contextually. Correlating diverse formats into a single queryable view with AI/ML, AKM-SEAMS™ helps teams move faster and make more confident recommendations by:
- Consolidating SEM micrographs, tensile test readings, ASTM E45/E2142 inclusion ratings, chemistry results, and cleanliness scores into a single queryable repository.
- Letting analysts compare multiple heat samples across test categories directly from the platform.
- Supporting cross-domain queries such as “Which heats with alumina-dominant inclusions also show high UTS (ultimate tensile strength)?” by connecting data that traditionally lives in separate lab systems (see the sketch after this list).
- Linking each heat to its inclusion populations, mechanical properties, and process metadata through a structured ontology.
- Providing on-demand analytical probes such as Dmax distributions, class composition breakdowns, and tensile comparisons. Steelmakers can apply these findings to any combination of heats, grades, or time periods to spot trends, diagnose quality deviations, and benchmark process changes.
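To illustrate the shape of that cross-domain query, here is a toy in-memory model in Python. It is not AKM-SEAMS’s actual ontology, API, or data; the heat numbers, field names, and threshold are invented for the example:

```python
# Toy in-memory "graph": each heat node links inclusion data and mechanical
# properties that would traditionally live in separate lab systems.
heats = {
    "H1042": {"dominant_inclusion": "alumina", "uts_mpa": 812.0},
    "H1187": {"dominant_inclusion": "sulfide", "uts_mpa": 845.0},
    "H1203": {"dominant_inclusion": "alumina", "uts_mpa": 861.0},
}

def alumina_dominant_high_uts(heats: dict, uts_threshold: float = 850.0) -> list[str]:
    """Heats with alumina-dominant inclusions AND UTS above a threshold."""
    return [h for h, d in heats.items()
            if d["dominant_inclusion"] == "alumina"
            and d["uts_mpa"] > uts_threshold]

print(alumina_dominant_high_uts(heats))  # ['H1203']
```

In a production system this would be a graph query over the ontology rather than a Python loop; the point is that a single question spans inclusion data and mechanical data that normally live in separate systems.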
By unifying fragmented quality data into one platform, AKM-SEAMS reduces the time from sample collection to actionable insight — enabling faster heat disposition decisions, earlier detection of process drift, and data-driven optimization of secondary metallurgy practices.
Big data doesn’t drive results; knowledge does. AKM-SEAMS helps organizations bridge the gap, turning complex scientific data into insight, understanding, and confident decisions.
Connect with the RJ Lee Group team to discuss how AKM-SEAMS can be applied to your specific scientific or engineering challenges. Contact our software services team to learn more or schedule a demo.



