How Plainsight Provides AI-powered Intelligence – Interview with CEO | DevOps Dialogues: Insights & Innovations

On this episode of DevOps Dialogues: Insights & Innovations, I am joined by Plainsight’s CEO, Kit Merker, for a discussion of the impacts of AI and Generative AI on application modernization, and innovative development methodologies for overall business goals.

Our conversation covers:

  • How Plainsight’s integration of visual AI and data science distinguishes its approach in the competitive Computer Vision market
  • The specific ways Plainsight addresses the challenges posed by the rapidly evolving landscape of Computer Vision technology
  • How Plainsight prioritizes customers’ end-to-end business processes to ensure the success of their Computer Vision initiatives
  • The evidence and examples that demonstrate Plainsight’s commitment to customer satisfaction, and how it has contributed to the company’s achievements in the field of Computer Vision

These topics reflect ongoing discussions, challenges, and innovations within the DevOps community.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Listen to the audio here:

Or grab the audio on your favorite audio platform below:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.


Paul Nashawaty: Hello. On this episode of DevOps Dialogues: Insights & Innovations, I’m joined by Plainsight CEO Kit Merker, discussing the impacts of AI and generative AI on application modernization, innovative development methodologies, and overall business goals. Kit, I’m really excited to hear about your new journey here as CEO of Plainsight.

Kit Merker: Well, you might be wondering why am I here on a DevOps podcast. And what’s cool is Plainsight is a computer vision company, but we’re focused on continuous updates to AI models and to computer vision applications. And so we really do have actually a DevOps spin, which is pretty exciting.

Paul Nashawaty: Yeah, no, absolutely. And the DevOps spin, when we’re looking at the changes, it’s very rapid with AI and visual AI, and the impacts to the business and what’s happening in the space. Kit, when I think about Plainsight’s visual AI and data science kind of approach to the computer vision market, what does the competitive landscape look like?

Kit Merker: Yeah. I think for us as Plainsight, we’re really focused on offering these computer vision solutions that work for business. And we do it with a simple concept, which we call a filter. And a filter is a way of taking camera data and turning it into a spreadsheet essentially. And we know that most businesses are run by spreadsheets, even if we call it an ERP system or a business intelligence system. The cool thing here is that with the rapid change in AI, and frankly look, everybody’s a little freaked out about the pace of change in AI. No one can keep up with it. The trick that I think is important in any kind of technology change is the ability to adapt and keep up to date, whether that’s triggered by a security vulnerability or a new technology or a functional improvement or a bug, you have to be able to update your software systems. And I think that in the AI space, this idea of being able to quickly change and give updates is pretty tough.

On the competitive landscape side, the way I think about this is that customers have choices for how they want to approach adoption of computer vision. And let’s assume for a second these companies really do have a computer vision problem, they have a vision AI problem. Some computer vision use cases are made up and not really that valuable, but if you assume that they have a problem they can really solve, they already have a place to put the data, they already have a place to use the data, maybe in an inventory planning or ERP system of some sort or some automation. One option is they can use a consumer solution, right? They go out and they say, oh, I’m going to buy the security camera, some Nest or Ring, or they’re using other kind of consumer technologies. Unfortunately that doesn’t work very well for business.

They quickly outgrow it or they can’t customize it, they can’t integrate the data. That usually doesn’t work beyond very, very small businesses. I think the second option is what I’d call a point solution. And you see these all the time. You see stuff for slip and fall detection, you see the laser-powered weeder for agricultural environments, where you run this big machine that has computer vision and uses lasers to kill weeds, and things like that. And in every kind of business vertical you can imagine, from agriculture to manufacturing to logistics to oil and gas to retail, they all have these sorts of point solutions popping up. Now, the great advantage here is that it solves one problem very well. And for a lot of businesses that’s a perfect solution for them and they should absolutely go do that. But the disadvantage is you create a bit of a silo in each of these solutions. Now if you’ve got, let’s say, three problems, and some companies are fairly vertically integrated, I talk to companies in the food service world that have everything from agriculture to retail stores, right? And they have to think about that entire process.

Now they’re buying a bunch of different point solutions from different vendors. And what that means is that they have silos of data, silos of maintenance, and I think most critically silos of AI stacks, where these different companies are now making the decision about what AI tools to use. And as that rapid change is coming, that’s increasing the risk that you’re going to end up with a software stack you can’t maintain, okay. And then the third option that I see for companies is generally kind of the DIY/consulting approach, where maybe you’re adopting a platform of some sort, or you’re bringing in some expensive consultants to help you build AI. Now the big issue here, I mean the advantage is now you’re in full control, right?

You can do whatever you want and the sky’s the limit, and so is the budget; that can work. And the challenge here, number one, is every company I’ve talked to has some PTSD about an AI science fair project that went wrong, but also the skillset. And if you look at the assumption that companies that could benefit from computer vision are going to have data scientists, that assumption is wrong. And so selling a platform for data scientists to build models is, to me, a very challenging approach. And so generally speaking, those are the approaches that I see in the market. I won’t speak to specific companies, but generally people fall into one of these buckets. That’s generally what I encounter.

Paul Nashawaty: Yeah, that makes a lot of sense. I was at a recent event, the AI Tech Field Day event that we hold, and a lot came up around the farm use case. That use case is incredibly powerful when it comes to AI, and it’s incredibly powerful to see how it works. The amount of tech that’s involved in farming today is amazing. I was blown away watching the process and how everything was happening.

And when you tie back all these different tech stacks and these solutions that are in silos, it really does get complicated, right? You have to have that team of developers or DevOps teams in place in order to dynamically adjust and be agile to work through these processes, these tech stacks. And I guess that kind of leads to my next question, right? When we look at these not just a tech stack, but there’s just an evolving landscape in computer vision technology, right? You touched on a little bit already with the silos and you touched on the different tech stacks, but how is Plainsight addressing the challenges that organizations run into?

Kit Merker: Yeah, it’s really fascinating. Agriculture for us is a huge, huge opportunity. I think there’s a massive modernization happening in that space. And you think about, I hate to use the word but I’ll use it, the digital transformation that’s been going on: there’s a lot of data that you can get to run an online operation by virtue of the fact that the whole system is digital, right? People come to your website, you’re able to track visitors. All the orders are digital. In the real world, in the physical world, that transformation hasn’t really happened yet. And we’re kind of seeing this happen with these different kinds of industrialization. And now with companies that are trying to get better planning and better operations, there’s a shift in workers. We’ve moved in agriculture from people who’ve been in this industry for a long time to less skilled workers and different workforce populations.

All of that is having an effect, and technology is making a big impact there. And you’re absolutely right that DevOps and kind of the principles of software maintenance are a really important part of making these systems work, especially over time as things are changing. The kind of end-to-end question, I mean the first thing that I look at is where can we add the most value in that space? And for Plainsight, we work a lot in agriculture. We count 40,000 cattle a day for one of the world’s largest producers of protein, JBS, and we’re doing a lot of work with them, but we’ve expanded into a lot of other areas as well. And so one of the things, again, as a CEO, we’ve got to think about is where do we focus? And I’ve gotten feedback from people, all my advisors tell me, focus on one vertical and really get into a vertical.

And actually because of my background in software and in DevOps and cloud computing, I’m looking at it a very different way. I think we have to focus, no question there, but we are going to focus on a particular part of the software stack. And that part is between the camera and the spreadsheet, okay. And if you think about that as a sensor, you imagine a camera as a very powerful visual sensor. And by the way, when I say camera, it could also be drone footage, it could be infrared, lidar, it could be night vision goggles, right? But you’re taking visual data and we’re putting a filter on it to pay attention to certain things and then outputting data in a format that can be used and can be trusted to do other analytics. Our part in the puzzle here is really about describing. It’s like an “if you see something, say something” kind of solution, and we can put that into a black box, right?

We’re using industry standard things like Docker and Kubernetes of course, and we’re allowing you to create this filter as an app that you can inject different models into. And now you can run that at the edge, you can run it in the cloud. The filter is just telling the camera what to pay attention to and outputting this event stream. That gets you a lot of advantages. It’s getting you closer to the source. Now the person who’s managing the ERP system can just go and stick a camera in front of the pile of gravel that’s constantly changing in size and get an accurate estimate. There’s not that game of telephone between the different systems. You get accurate data from the source directly into those digital systems. You’re getting a huge savings in cloud computing and latency. I run at the edge, which means I’m collecting video, but I don’t really need to stream the video to the cloud to process it.

I can just stream the data outputs. It’s kind of a compression in a way, right? You’re taking high intensity video data and you’re pulling out just the things you care about. That also means an advantage in privacy. That means that now the data that I didn’t necessarily want to share with the cloud or put in the cloud, with people’s faces or proprietary processes or just other video scenarios that I don’t want to share, that’s all kind of protected at the edge. It can be deleted and managed separately. It can be filtered so we can do face blurring before it’s uploaded. All the things you’d want to have for privacy management can all be encoded in that. And then really importantly, you can change it, right, because you’ve got this ability to insert and change the software. Now what you’ve got is really more like a cloud computing way of thinking.
You’ve got this set of apps, right?

Those apps, we call them filters, but they’re basically apps. They’re AI powered, they encapsulate models that have been derived from training processes and that is giving you the ability to control and update and secure your AI footprint, but importantly allowing your IT team to be the heroes of the AI without becoming data scientists. And that ability for an organization to kind of take ownership of the AI. And also I think this is the other important part is I think a lot of companies are trying to adopt AI to get a strategic advantage. One of the difficulties is, okay, so I give you all this data and all this training and all this feedback loop to help make the models better, and then you’re going to turn around and sell that to my competition.
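The filter pattern Merker describes, camera frames in, structured events out, can be sketched roughly as follows. This is a hypothetical editor’s illustration, not Plainsight’s actual API: the `Event` type, `run_filter` function, and toy model are all invented for clarity. The point is that only derived data, not the video itself, leaves the edge.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, List


@dataclass
class Event:
    """One row of the 'spreadsheet' a filter emits per frame."""
    frame_id: int
    label: str
    count: int


def run_filter(frames: Iterable, model: Callable, label: str) -> Iterator[Event]:
    """Apply a detection model to each frame at the edge and yield only
    the structured event data, never the raw video (the 'compression'
    described in the interview)."""
    for i, frame in enumerate(frames):
        detections: List = model(frame)  # e.g. bounding boxes from a small edge model
        yield Event(frame_id=i, label=label, count=len(detections))


# Toy stand-in: each "frame" is just a list of already-detected objects.
frames = [["cow", "cow"], ["cow"], []]
events = list(run_filter(frames, model=lambda f: f, label="cattle"))
for e in events:
    print(e.frame_id, e.label, e.count)
```

Packaged as a container image, an app like this could be swapped or updated independently of the camera, which is the DevOps-style update path discussed above.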

I hear this from everybody, data sovereignty, AI sovereignty is a big issue for companies that are trying to figure out the right move and they feel like right now what’s going to happen is they’re going to give up a lot of proprietary advantage that’s going to just go to their competition. The one thing that we do differently, I think, than any other company I’ve seen is we will not take the data from one customer that we use to train a model for them or to fine tune a model for them. We won’t take that and use it for a pre-trained model that we resell to another company. What this gives you is that these companies can now create a strategic advantage of having a proprietary AI model, solve a specific business problem across many places in their software stack update over time and not have to have a deep ML and AI expertise or the cost and complexity associated with that type of custom project. And I think it’s a winning combination. It’s resonating very well with customers.

Paul Nashawaty: Kit, what I really like about what you’re doing here is the fact that you can focus on 80% of developing towards the stack, but you’re addressing a number of different use cases. And by addressing those other use cases, you’re methodically focused on developing the stack, which is brilliant, but it’s also giving you the ability to have a broad range of use cases out there. And that’s what I think the AI piece is. That’s one part. The second thing I’d really like to understand from what you described is that there’s a lot of discussion around using existing data sets. When you’re looking at these LLMs that you have in place, or these others, whether it’s private AI or public data sets or whatever it may be, those different data sets already exist within many organizations. How are you doing that, and how are you taking that methodical approach and applying it to the existing data sets so you can use it across these multiple use cases?

Kit Merker: Yeah. The data sets, it’s like where do models come from, right? Where do they come from? And there’s a certain amount of harvesting that has to be done in order to create, especially these sort of foundation models and the things that we see from the OpenAIs and Geminis of the world, which are producing these very large and consumer-oriented solutions to start with, and that are eventually definitely used in business settings as well. This is something where foundational research and academic research, as well as these big platform-level game changer companies, are really lighting the fire around AI. And so the way that works for us, one is if there’s a publicly available data set or model, of course we’re going to use it to the extent we can, and that’s a great place for us to start.

The second part is we know that for businesses to get high accuracy, they’re not interested in getting a haiku written about their inventory. They want to see the actual data. And again, it comes back to our role is to describe precisely what we see and to use a filter to focus the attention of the camera on the thing that we care about. And we do that at the edge, that means we need a small model. We actually can’t use a large model at the edge. We run a small model. The filter and a series of filters gives you the ability to kind of single purpose those models that are trained for that. The way that works for us and how we take advantage of the large models is through the training process. A big part of it is we collect video data from customers. We put it through tagging, labeling, we do supervised deep learning, and that produces a model. That’s something we can do without much interaction with the customer, to be honest.

They give us some sense of what they’re looking for. The problem comes in when you start to want to detect or you encounter rare or sort of difficult events. For example, working with a tuna fishing company, we got to make sure that there’s no endangered fish in the nets. Okay. How do we create that scenario to train the AI, right? I can’t actually put endangered fish in the nets. Well, this is where generative AI is a very powerful tool because what we can do is we can take pictures of the nets and the fish and the stuff we have readily available plus pictures of the endangered fish, and we can combine that together and generate images that we can then feed into our training process as the exceptions we’re looking for. And so this synthetic data enrichment really helps us fill in those corner cases.

Hey, look, we got a bunch of cows out in the field grazing. We want to count them. Okay, well what about seeing it at sunset? What about seeing it while it’s snowing? What about in the fall and different colored cows and different lighting conditions? We can generate all those different variations without having to see it in the real world. This applies to other safety issues, like we want to detect flat tires on trucks, we want to detect derailments of trains, we want to detect other maybe criminal behavior, et cetera. The more that we can kind of enrich the data set without actually having to encounter those things in the real world, the better. And so that’s one technique we use as a way of enhancing the data. And again, how we keep the data and the AI models kind of safe and proprietary is that we can then use that to add onto a particular customer’s data.
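The synthetic enrichment idea described above, crossing common scenes and conditions with rare targets to enumerate training scenarios you can’t capture in the real world, can be sketched as a simple combinatorial step. This is an illustrative sketch only; the scene names, and the idea that each tuple seeds a generative compositing step, are the editor’s assumptions, not Plainsight’s pipeline.

```python
import itertools

# Common, easily captured variations.
backgrounds = ["field", "feedlot"]
conditions = ["sunset", "snow", "fog"]
# Rarely observed targets you cannot stage in the real world.
rare_classes = ["endangered_fish"]


def enumerate_synthetic_cases(backgrounds, conditions, rare_classes):
    """Cross every background/condition with every rare class. Each
    resulting tuple would seed a generative-AI compositing step that
    produces a labeled synthetic image for the training set."""
    return [
        {"background": b, "condition": c, "target": r}
        for b, c, r in itertools.product(backgrounds, conditions, rare_classes)
    ]


cases = enumerate_synthetic_cases(backgrounds, conditions, rare_classes)
print(len(cases))  # 2 backgrounds x 3 conditions x 1 rare class = 6 scenarios
```

The generated images then flow into the same tagging and supervised-training loop as real footage, which is how the corner cases get filled in.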

And the result is that they can get very high accuracy in their environment without having to spend a lot of time engineering different conditions or looking for these cases. And then of course, when we do encounter a novel case, because of the principles of DevOps, we’re continuously taking feedback and creating a feedback loop. We can say, oh, we encountered something we hadn’t anticipated. The model went haywire. Let’s learn from it. Let’s analyze it, and now let’s go update the model. And guess what? Now we can deploy it into the filter in a very predictable way, because we have that path to production inspired by DevOps. And because we’re using standard IT technologies like Kubernetes and Docker, that update process is very straightforward. I think that’s really the key here, how this feedback loop works so we can make it better, as opposed to asking, hey, is the model good enough to get me off the ground, and is the model going to give me the accuracy I want?

We kind of assume that that’s a moving target, and achieving accuracy is actually a process. And so it’s not about just setting a goal and saying, oh, we’ve got to be three nines accurate or else it’s not worth doing. Instead, it’s asking, okay, is this better than the alternative of doing it by hand? Do we have a method for dealing with false positives and false negatives? Do we have a method for improvement? Do we have a method for data enrichment? When we do encounter those cases, we can get something more sophisticated the next time around. And the faster we can create that learning cycle for our customers, the more productive it is and the more value they get out of that system. It’s not directly related to whether AI can do it. The question is, can we build a process by which we get the outcome we want?
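The false-positive/false-negative accounting described above boils down to standard precision and recall checks on each model update. A minimal sketch, with illustrative numbers rather than any real Plainsight metric:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision: of the events we flagged, how many were real?
    Recall: of the real events, how many did we flag?"""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Hypothetical snapshot of one model version: 80 true detections,
# 40 false alarms, 10 missed events.
p, r = precision_recall(tp=80, fp=40, fn=10)
print(round(p, 2), round(r, 2))  # 0.67 0.89
```

Tracking these two numbers per deployment is one concrete way to treat accuracy as a moving target rather than a pass/fail gate.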

Paul Nashawaty: I want to come back to the process piece in just a second, because I definitely want to double click down on that. But a couple of things you mentioned I found really interesting. I was just recently doing a customer interview and we were talking about Smart City, using this approach of computer vision for Smart City, and the security element that you were talking about. When they’re using these cameras to show when somebody is walking down the street, they can analyze the compression of the sole of the shoe to estimate the weight of somebody, along with their height. I thought that was brilliant. I thought that was amazing, ’cause if you look at a scenario, say there was criminal activity and somebody starts running down the street, you can get some statistics about that individual and narrow it down to that profile.

That’s a real life use case, how it’s being used. And definitely it has to be done quickly, right? Because time is of the essence, if there’s an activity that occurs, you have to be able to react very quickly to the action that occurred. Which leads me back to process, and you mentioned process, and I want to double click down on that. What I find in my research is 18% of organizations in a recent study are using AI and gen AI in their production workload capacity, 18%. It’s relatively small. 27% are evaluating AI and gen AI kind of approaches in this research. And the interesting thing about it, it kind of leads me to of course the process, looking at end-to-end business process and using computer vision initiatives to kind of get there. But I also want to kind of tie that to your thoughts on the maturity of organizations.

I know that this is relatively new for a lot of organizations, but when we talked and you briefed me on what you’re doing, I was really amazed at what you’re showing for the output and the impacts to businesses, versus somebody sitting there with a clicker, clicking when somebody walks by. You’re showing a real live scenario and the computer inputs that show this, adding that to the spreadsheet, and then being able to use that data to do processing. Can you talk a little bit about the adoption rate, the maturity, and where you see it going?

Kit Merker: I think, so first of all, thank you. The adoption, I think, is one of the biggest challenges. I mean, the way I like to put it is computer vision is a solved problem, okay? Computer vision has been around for a long time. I think it’s been reinvigorated, frankly, by the hype around AI. And I mean that; I’m very grateful for the hype, don’t get me wrong. But we see that ChatGPT, LLMs, Gemini and OpenAI, these very big platforms that are bringing AI to the masses, are reinvigorating the interest in computer vision and making people question, well, should I have this? Everybody needs an AI story, okay. Everybody in 2024 needs an AI story for how they are going to adopt AI in their business. And I think, like I mentioned before, there’s a bunch of different ways that you can approach it. And what I’ve seen is that these companies assume the cost and complexity, and they’re often right.

The cost and complexity will not work for them unless they’re the biggest companies in the world. And so there’s this very clear, underserved mid-market where the company’s big enough that there’s a business need, right? They’re not sort of a small business or a food truck or whatever. They’re doing some real business. They’ve got real operations, real problems, and they can make investments, but they’re not so big that they can waste time and money experimenting, right? Trying stuff out, especially not big ticket items where you’re going to pay a data scientist an incredible amount of money. It’s such a big bite out of their budget and they have no idea if it’s going to succeed or fail. Risk, complexity, costs, these are the things that drive a slowness to adoption. If you think about what Plainsight is bringing into the world, and what we’re really doing differently is we put the adoption issue as our first top order issue. Okay? We said, how do we make this easy?

How do we make it so that the business person who has a real problem can understand the path to getting it into production, give them the data that they want, and also understand the constraints of it? It’s not magic, right? We help people understand it’s not magic. What’s going to happen if it’s wrong, from a false positive, false negative perspective? How are we going to put guardrails around it, et cetera, et cetera, and then making sure that their IT teams already know how to deploy it. The biggest gap is that when you go to a company and say, hey, let’s do computer vision, they look at each other and they go, well, I don’t know how to start on that. I don’t know what to do. I’m reading some blogs about machine learning. And by the way, trying to do that is very expensive, and you dabble, you get frustrated, it doesn’t solve the problem, and that is really problematic.

If you kind of look at it the other way: what do we do to make it really, really easy to adopt? Use your existing cameras, don’t require AI expertise, make sure that privacy and security and everything else is built in. Make sure it’s future-proof so that if AI changes, you’re not going to regret your decision, right? That you’re going to be able to use different AI over time. Make sure that the data is not going to feed back into competitive things, that you’re not going to accidentally give away the keys to the farm, right? Show an immediate business impact and be constrained about what it can actually offer. And if it works, be able to scale it up and not say, okay, well, we get this first thing, but now if I multiply that by the number of cameras in my environment, I’m going to go bankrupt. You need to have all those things checked in order to do that. We’ve worked backwards, and I’m a big believer in working backwards from the customer. What is going to be really great for the buyer of this, who’s really going to get a lot of value out of it, and how are they going to live with this solution? And then when you think about the product, you can create the constraints of that product. And then the question becomes, well, how do I make the technology work?

Well, like I said before, the technology is there. We’re just taking the ingredients of computer vision capabilities that have been around for a long time. Kubernetes is celebrating its 10th anniversary this year, by the way. Kubernetes, I think officially is legacy software amazingly enough, right? We’re using Docker, we’re using these other things that are easy. It runs on Linux, right? And we over time can expand it to other sort of embedded solutions and things like that. But there’s also an inflection point on the hardware, let’s be honest, right? The NVIDIA hardware has gotten really good and has gotten relatively inexpensive for small stuff. The big stuff, running an LLM at the edge for 200 grand a box or whatever is kind of not possible, but there’s quite a few different edge computing capabilities that can run this stuff, and they’re coming from a variety of vendors.

We’re working with Soft Iron, for example, which is a great edge computing company we’ve been working with closely. And there are just so many different ways that you can take this solution now. It used to be expensive; now it’s a reasonable cost and you see the ROI. It used to be complex; now it’s simple. It used to be risky; now you can have confidence in it. That combination just leads to higher adoption. And so once people get past the assumptions they have in their head, the reputation of these AI projects as science fair stuff where you’ve got to be beyond a rocket scientist to get it working, once they realize that that’s just not the case anymore, then people get excited and they move forward. And I’m seeing that across the board. We’re seeing really inspired customers who want to see this come to production. They see the value of it, and it’s frankly very exciting. I’m amazed at the passion that people have around this.

Paul Nashawaty: Yeah, absolutely. And the thing is, you touched on a number of areas that really resonate with what I’m seeing in organizations today, right? Complexity is one of the top five challenges that CIOs run into. And when you walk into a scenario and you say, look, you need to be using computer vision, like you were saying, they go, what is that? How do we get set up? We have too many things going on now. We don’t need to add something else into the mix, right? This is where I think Plainsight really provides that frictionless approach to get in, right. And that’s what I was liking about the way you were describing it, and that’s one of the reasons why I thought it would be valuable to share this conversation. But one of the things I do want to talk about is, we’ve talked about a lot of hypotheticals. We talked about a lot of where things could go and where things may be going and very well are going. And I think what will be helpful to the audience is for you to share your customers’ examples. You touched on a couple already, but real-life examples, to say they were really complex, they really had these high barriers to getting something done, but now with Plainsight, they were able to get it done quicker and less expensively, right? I think that’d be helpful.

Kit Merker: Yeah, yeah, for sure. I mean, one of the examples that we’ve had for a long time now, and it’s been quite well benchmarked, is the cattle counting. We actually went viral with a sheep counting video recently, and we get a lot of interesting inbound requests to help with cattle counting, which I think is a really important one. It’s not easy to do, and the protein production side of the cattle counting is one side of it. The other side is the auctions, where cattle are sold at auction, and it’s a tough business. You’ve got this bunch of cattle showing up, a bunch of people buying stuff, an auctioneer shouting out numbers, and you’ve got these poor ranch hands trying to count the cattle. They all keep their numbers separately. And then what I learned, I didn’t know this until we got this customer in Oklahoma, is that the auctioneer every night sits down after dinner and reviews the tape to make sure the count’s right. You’ve got multiple people doing this, and it’s a real life impact, and it’s a big event for them to have the auction, right?

All this cattle’s coming in and all the ranchers are having a great time. That’s one where we can eliminate that uncertainty and get the count to a reliable number. But we also work with a large mushroom producer. And one of the interesting things about mushrooms, which I didn’t know, is that first of all, they grow three times. And the other thing that’s interesting is that in the States, aesthetics matter a lot to shoppers, it turns out. And there are mushrooms that are perfectly edible, but once they get overripe, they change color a little bit and people don’t buy them. And so there’s a lot of what’s referred to as shrinkage in that market because of that. And so with Plainsight, that can now be detected, and we can find the optimal time for picking to avoid that discoloration, which means obviously better profitability, et cetera. And the ROI on that is kind of insane.

I mean, I penciled out the ROI numbers and I'm like, well, this is embarrassing. I can't say how much the ROI is on this. It's too much. That's another great example. Dams and bridges are another great example where you have fish and you've got to do surveys. We help a marine biology company, a company called Marine Situ, do those dam and bridge surveys. We just recently announced them. Another place people can go and see it themselves, which I think is interesting, and again, this is kind of early stages, is a website that we launched. What we did is we took the 1,547 public camera feeds in Washington State, where I live, which are already up and running and already published to the web. We took all 1,547 of them and we created a wildfire detection filter. Every five minutes we run the filter against those feeds, and we basically provide a score of how likely it is that the camera is seeing fire versus seeing safety. Now, as I mentioned before, accuracy is a moving target. In the very beginning, we had tons and tons of false positives.

It thought the whole state was on fire. We've now refined it so that the false alarms are much less frequent. We've been partnering with the Department of Natural Resources in Washington to get feedback and also to look for signals of false positives and false negatives, early warning signs, et cetera. We're starting to look for things like power lines that might indicate sparks are coming. We can do something like that. We still have issues with headlights, we still have issues with sunsets, we still have issues with lens flares, and we still sometimes see smoke or fog that we think is fire. But the great part is that now that we've got the data set, we can continually improve the accuracy, and we're seeing much, much higher accuracy on the site already. We want to expand that. We want to get more cameras, more researchers, more firefighters engaged and involved, helping us tune that model. And then my hope is that it will become a public resource. I don't want to make money off of wildfires, to be very honest with you.
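The pipeline Kit describes (poll every public camera feed on a fixed schedule, score each frame for fire likelihood, and flag high scores for human review) can be sketched roughly as below. Everything here, the camera-to-frame mapping, the `fire_score` stub, and the `0.8` threshold, is a hypothetical stand-in for illustration, not Plainsight's actual filter or API:

```python
import random

FIRE_THRESHOLD = 0.8  # illustrative cutoff: scores above this get flagged for review


def fire_score(frame: bytes) -> float:
    """Stand-in for a trained fire-detection filter.

    A real filter would run model inference on the camera frame; here we
    derive a deterministic pseudo-score from the frame bytes so the
    sketch is runnable end to end.
    """
    random.seed(frame)  # deterministic per frame, for illustration only
    return random.random()


def poll_feeds(frames_by_camera: dict[str, bytes]) -> dict[str, float]:
    """One polling pass: score the latest frame from every camera."""
    return {cam: fire_score(frame) for cam, frame in frames_by_camera.items()}


def flag_alerts(scores: dict[str, float],
                threshold: float = FIRE_THRESHOLD) -> list[str]:
    """Return the cameras whose fire score exceeds the alert threshold."""
    return sorted(cam for cam, score in scores.items() if score > threshold)


if __name__ == "__main__":
    # In the real system this pass would run every five minutes against
    # the live feeds; here we use placeholder frames.
    frames = {"camera-001": b"frame-bytes-a", "camera-002": b"frame-bytes-b"}
    print(flag_alerts(poll_feeds(frames)))
```

Tuning the threshold against confirmed false positives (headlights, sunsets, fog) is the feedback loop Kit describes with the Department of Natural Resources: each reviewed alert becomes labeled data for the next model revision.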

There are plenty of people in business where we can support them, but I think it's a great showcase for what the technology can do, and especially how the technology can improve. In the real world, there are so many examples of everything from drive-through analytics to bottle filling, to the cattle, to everything else I talked about. And this is not science fiction, this is not a, oh, maybe someday kind of thing. This is something that these companies are already relying on, and we've just made their lives easier and better. And guess what? Not a single one of them had to write a line of code or hire a data scientist to do it. They're literally licensing the software from us. They're able to install it using common IT-level commands like Docker run, simple stuff. And it really is an exciting thing to be able to take what seems like such futuristic technology and make it so that normal, everyday people can just start using it.

And they really do see the… And by the way, normal, everyday people are sophisticated. That's not their problem. They can understand the technology, but that doesn't mean they have the deep skills or the attention span or the budgets to cover these very expensive solutions. And so I think for us, just meeting them where they are is really the key. And I'm excited that that is a real-world thing, actually. I don't want to be in the hype business. I want to be in the solving-people's-problems business. I'd rather not talk about AI. I'd rather just tell them it's magic and have them buy it because it works. You know what I mean? But yeah.

Paul Nashawaty: Yeah. Kit, it's amazing. I mean, it's a far cry from the watchtower, the fire spotters out there climbing up to a tower, to what we're doing today in just a short period of time, right? We're talking about just 20 or 30 years-

Kit Merker: No, no, listen, it's still happening. Let's be very honest. Firefighters today are actively monitoring screens. And when we talked to DNR about this, what they told us is that they already have all these cameras. They're using them already; they're the same public cameras. And there were some initiatives in Washington State, I won't get into the politics of it, to spend a lot of money, a lot of taxpayer money, on AI solutions that weren't ready, weren't ready to produce value, and weren't making a difference. And as a Washington State person, and by the way, I don't care where you are on the political spectrum, you can be on the right wing and be upset about the spending, you can be on the left wing and be upset about the environment, I don't really care.

Fire doesn't care where you are on the political spectrum. This is something that affects everyone. I think it's an important issue for public safety. And for Washington State, one of our big outputs is wine. We have a great wine region here in Washington State. 2021 was our worst fire season on record. Guess what? Bob Batts, one of our winemakers, wouldn't release the vintage because the smoke was so bad that year. Whether your house is going to get burnt up, or your kids are going to breathe the air, or you're going to breathe the air, or you can't drink the wine you want, it affects so many things. And this is a reality that we have to be prepared for. So if we can do a small part in that by using technology that's readily available, I'm going to do that and I want to see it happen. And I don't want to hold anyone ransom for it. I want it to be something that's available to anyone.

Paul Nashawaty: Kit, this is great. And what would you recommend to the audience if they wanted to learn more about this?

Kit Merker: Yeah, if you go to our website, that's where you can find out about our company. And of course we have listings for filters and filter box, which is our containerized runtime for filters. You can try it out yourself. We have some demo filters, including Seymour Pong. You can play Pong with your hands on the screen. And Seymour is our mascot, the elephant, because we like to help people see more. Also, our wildfire site is a great place to go check out what we've got going with the fire detection in Washington State. And then if you want to follow me, I still say Twitter, on X, I'm @kitmerker, and I'm always happy to chat with anybody. If anybody has any computer vision questions or ideas, I'm always happy to talk. No obligation to buy. I'm happy to talk to anybody.

Paul Nashawaty: Kit, as we're wrapping up here, I can clearly see Plainsight has a bright future ahead of it. There's a lot happening. Maturity is growing. Companies are going to need technologies like yours to get it in motion. I want to congratulate you on your new role. It's really exciting, and you've done great things here. I also want to thank you for your insights and perspectives on this session today. And I want to thank the audience for attending and recognizing the importance of this topic. If there's any additional information you'd like to discuss, please feel free to reach out to us. Thank you, and have a great day.

Kit Merker: Thanks, Paul. Great to be here. I really appreciate it and hope we get to talk again.

Paul Nashawaty: Likewise. Thank you.

Other insights from The Futurum Group:

Application Development and Modernization

The Evolving Role of Developers in the AI Revolution

Docker Build Cloud Aims to Revolutionize DevOps

Author Information

Paul Nashawaty

At The Futurum Group, Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release and operations. With a wealth of expertise in digital transformation initiatives spanning front-end and back-end systems, he also possesses comprehensive knowledge of the underlying infrastructure ecosystem crucial for supporting modernization endeavors. With over 25 years of experience, Paul has a proven track record in implementing effective go-to-market strategies, including the identification of new market channels, the growth and cultivation of partner ecosystems, and the successful execution of strategic plans resulting in positive business outcomes for his clients.

