Webinar Summary: State of AI in Mining
Webinar Replay:
Download Slides:
Summary:
We spent the last four years talking about what AI could do. In 2025, we finally saw what it does do. CEO Ravi Sahu presents the need for AI within the mining industry, how the AI hype has faded, how AI is being used in practical tools, physical AI vs. LLMs, mine-to-mill fragmentation and integration, pocket AI and its usability, and more!
Polls:

Q&A:
Can you discuss the differences in how LLMs and Strayos's computer vision models are trained and how they work?
Yes. LLMs are trained on data that exists on the internet: social feeds, lots of books, as well as Wikipedia. Anything text-related is used to build these LLMs, which are continually evolving. Strayos models predominantly focus on computer vision, machine vision. They take data such as images, LiDAR scans, or any other source where you can build information from the pixels. Images, videos, LiDAR... we structure that information to find patterns. The key difference is that these are all visual models: they use networks trained to identify different visual features through pattern recognition.
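To make the distinction concrete, here is a minimal, hypothetical sketch of how a supervised vision model learns patterns from labeled pixels, as opposed to an LLM's next-token training on internet text. The network, image sizes, and labels are illustrative stand-ins, not Strayos's actual models or pipeline.

```python
# Minimal sketch (not Strayos's actual pipeline) of supervised training
# for a vision model: predict from pixels, compare against human labels,
# adjust weights. Shapes and class names are illustrative only.
import torch
import torch.nn as nn

# Stand-in for a batch of site images (e.g., drone photos of a muck pile):
# 8 RGB images at 64x64 pixels, with a per-image label such as
# 0 = "fine fragmentation", 1 = "oversize present".
images = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

# A tiny convolutional network: the convolution learns local visual
# patterns (edges, textures, rock boundaries); the linear head maps
# those patterns to classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised training step.
logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```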
Can you explain why Strayos isn't subject to some of the problems LLMs have, such as bias, hallucinations, and sycophancy? (Sycophancy is when ChatGPT tells you how wonderful you are all the time and how great everything you do is.)
So what I would call hallucinations does exist; it's more a matter of predicting the wrong thing from a visual perspective. For example, on a haul road the model might predict a berm where there isn't really a full berm, just a pile of rock on the side of the road. Those things do exist. We call them model anomalies, or false positives, and we go and retrain on those false positives; it's a continuous process. Another example is rock detection, where a lot more bias comes in on size detection, so we go and retrain those size detection models. With LLMs, you also have bias. LLMs are biased not because they give you wrong information, but because of how they are trained: they are limited to a certain set of knowledge. It may feel like bias, but it results from the limited set of information the model has.
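As a rough illustration of the flag-and-retrain workflow described above, here is a hypothetical sketch; the record structure, class names, and fields are assumptions, not Strayos's actual data model.

```python
# Hypothetical sketch of the "flag false positives, then retrain" loop:
# anything a human reviewer relabels is queued as new training data, so
# the model is corrected rather than left to repeat the same mistake.
from dataclasses import dataclass

@dataclass
class Detection:
    object_id: int
    predicted_class: str   # what the vision model thinks it saw, e.g. "berm"
    confidence: float
    reviewer_class: str    # what a human reviewer says it actually is

detections = [
    Detection(1, "berm", 0.91, "berm"),
    Detection(2, "berm", 0.62, "rock pile"),        # model anomaly / false positive
    Detection(3, "oversize rock", 0.88, "oversize rock"),
]

retraining_queue = [d for d in detections if d.predicted_class != d.reviewer_class]

for d in retraining_queue:
    print(f"object {d.object_id}: predicted '{d.predicted_class}', "
          f"relabeled '{d.reviewer_class}' -> added to retraining set")
```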
Where does the design-measure-learn loop break most often in live operations, and why?
What we're seeing is that the design-measure-learn loop works consistently well within a single department: if you have a blasting department or a geology department, they may update the rock model there. But once you get to the mill side, that's where the biggest disconnect happens. You don't have that information readily available until you put a process in place and start to measure things more effectively. That's one of the areas where we're doing a lot of work: how do we build a sensor layer for data collection so that gathering this data becomes a lot easier? That's where we see the biggest gaps.
What fails more often in AI projects: the models, the data quality, or the operating model?
In our experience, a whole lot of it comes down to data quality. If the data isn't good at the very first level, the model fails. We could train models to identify which data is bad, but we haven't reached that level yet. Data is the key factor behind model underperformance. The other aspect is that a model goes through a continuous process and requires continuous teaching. That teaching also requires calibrating the models, and if that calibration isn't happening from the user-input side, the model stays stagnant and doesn't continue to improve. So user input is a driving factor, alongside the model training done by the AI engineers who specifically train these models.
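A minimal sketch of the two failure points mentioned here: a basic data-quality gate before training, and a user-input calibration step. The field names, plausibility range, and calibration formula are assumptions for illustration only.

```python
# Illustrative only: reject bad records before they reach the model, and
# apply a user-supplied calibration factor to future predictions.
def is_usable(record: dict) -> bool:
    """Reject records that would degrade the model: missing fields or
    physically implausible fragmentation sizes."""
    size = record.get("p80_mm")
    return size is not None and 0 < size < 2000

raw_records = [
    {"blast_id": "B-101", "p80_mm": 250},
    {"blast_id": "B-102", "p80_mm": None},   # missing measurement
    {"blast_id": "B-103", "p80_mm": -40},    # sensor or entry error
]
clean_records = [r for r in raw_records if is_usable(r)]

# User-input calibration: a site engineer compares model output to a
# ground-truth measurement, and the ratio adjusts future predictions.
model_p80, measured_p80 = 250.0, 230.0
calibration_factor = measured_p80 / model_p80

def calibrated_prediction(raw_prediction_mm: float) -> float:
    return raw_prediction_mm * calibration_factor

print(len(clean_records), "usable records;",
      f"calibrated 250 mm -> {calibrated_prediction(250.0):.0f} mm")
```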
How concerned should we be that the AI will go rogue and destroy humanity?
For now, I think we're at the stage where we're finding basic use cases, so I'm not worried about that.
What is general AI?
I understand the question as: what is artificial general intelligence? That would be one model that has answers to everything, general intelligence at the level of human intelligence. We are far away from that. We're still working on world models, and teaching a model physics requires a lot of computing and training. We are not at the stage of having general intelligence.
Presenter:

Ravi Sahu, CEO and Founder of Strayos
With a background in engineering and over a decade of experience in digital transformation and product management, Ravi worked with Fortune 500 companies worldwide before founding Strayos. He holds an MBA from Washington University in St. Louis and is an expert in leveraging AI.
Ravi has taken Strayos to the next level by developing advanced computer vision and machine learning solutions to optimize operations, improve safety, and enhance efficiency in the mining industry.

New technologies are rapidly changing the drilling, blasting, mining, and aggregates industries, empowering them in ways never before possible. Make sure you are taking advantage of the best tools available.
Check out our 2 Free E-books on AI applications for the drilling, blasting, and mining industries to see all the amazing advances that are available.
AI Guide for Drilling and Blasting
AI Guide for Mining
Watch our videos:
YouTube