How to Use A.I. to Explore New Markets
Recently, a client asked us to help them investigate a market they weren’t currently competing in, but were considering. They asked us to compile a report on the top vendors in the space; they would then buy products from those vendors to evaluate how well each was meeting the requirements of that market.
While we had no experience in the market itself, we do know how to design, build, and streamline processes. Using a mixture of three top A.I. models (ChatGPT, Claude, and Gemini), we compiled and verified information about the top vendors in that market.
Click here for a flowchart of this workflow.
First, we gave each of the three main A.I. models an identical prompt and initiated a Deep Research report from each, yielding three reports, one per A.I. After reviewing the reports, we uploaded all three to a new notebook in Google’s NotebookLM (NLM) and asked NLM to compare and contrast the findings, then produce a single report explaining the market and ranking the vendors based on the reports and the additional criteria we specified.
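The fan-out step above can be sketched in a few lines of Python. This is a minimal illustration, not our production tooling: the model callables below are hypothetical stand-ins for real Deep Research API calls, and the names `fan_out` and `stub_models` are ours, not any vendor’s.

```python
# Sketch of the fan-out step: send one consistent prompt to several
# models and collect one labeled report per model. The lambdas below
# are stubs standing in for real A.I. API clients.

def fan_out(prompt, models):
    """Send the same prompt to every model; return {model_name: report}."""
    return {name: ask(prompt) for name, ask in models.items()}

# Stub "models" for illustration only.
stub_models = {
    "chatgpt": lambda p: f"ChatGPT report on: {p}",
    "claude":  lambda p: f"Claude report on: {p}",
    "gemini":  lambda p: f"Gemini report on: {p}",
}

reports = fan_out("Identify the top vendors in the target market", stub_models)
print(len(reports))  # 3 reports, one per A.I.
```

The key design choice is the single shared prompt: because every model receives identical instructions, the resulting reports are directly comparable in the aggregation step that follows.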
We reviewed the combined report from NLM (note how we are keeping the humans in the loop to ensure that the A.I.’s meet the requirements and don’t hallucinate), and selected the top six vendors from the aggregated data.
Because you should still double-check A.I. work, especially when the data matters, we wrote a new prompt to initiate a Deep Research report on each of the six top vendors. (The earlier research was not vendor-specific; it was a comprehensive review of the market as a whole.) We asked each of the three A.I.’s to perform Deep Research on each of the six vendors: six reports per A.I., for a grand total of 18 reports.
We didn’t stop there.
We then uploaded all three reports on Vendor 1 to Claude, our favorite A.I. for this kind of work, and asked it to compare and contrast the reports, then research the differences to verify which claims were true. We repeated the process for each of the six vendors. Here again, we used multiple data sources to reduce the number of hallucinations by these A.I.’s. This phase resulted in six comprehensive reports, one per vendor.
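The per-vendor cross-check can be sketched the same way. Again, this is an illustration under assumptions: `reconcile` is a hypothetical stand-in for the review-and-verify call we made to Claude, and the stub data below is invented.

```python
# Sketch of the per-vendor cross-check: bundle the three model reports
# for each vendor and hand them to a single reviewer model to reconcile.
# `reconcile` is a placeholder for the real review call, not an API.

def cross_check(vendor_reports, reconcile):
    """vendor_reports maps vendor -> {model: report}.
    Returns one reconciled report per vendor."""
    return {vendor: reconcile(vendor, list(by_model.values()))
            for vendor, by_model in vendor_reports.items()}

# Stub reconciler for illustration only.
def stub_reconcile(vendor, reports):
    return f"{vendor}: reconciled from {len(reports)} reports"

vendor_reports = {f"Vendor {i}": {"chatgpt": "r1", "claude": "r2", "gemini": "r3"}
                  for i in range(1, 7)}
merged = cross_check(vendor_reports, stub_reconcile)
print(len(merged))  # 6 reports, one per vendor
```

Structuring the data as vendor → model → report makes the arithmetic from the previous step explicit: six vendors times three models is 18 inputs, collapsed to six reconciled outputs.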
We returned to NotebookLM with the six reports, uploaded them, and asked NLM to again compare and contrast, this time including a suggested Bill of Materials and estimated costs for equivalent hardware from each vendor. (The costs were estimated from publicly available information, such as the vendors’ websites, retail stores, and public bids such as RFQ responses.)
Finally, we approached the end of the process! Humans again reviewed the report from NLM; while we took significant steps throughout this process to minimize A.I. hallucinations, we were preparing a final analysis for our customer, and it was critical for both the customer’s needs and for our reputation that our data was accurate. Once we were comfortable with the aggregated research data, we created the final report. We presented it to the client on time and on budget, and we exceeded their expectations.
===
Throughout this process, humans reviewed the reports at each stage, and in some cases we refined our prompts or criteria to ensure the output met our needs; while this lengthened the process, it also resulted in a more accurate analysis.
Now, this may seem like a lot of effort, and it was; managing multiple A.I.’s through multiple rounds of research took significant time and organization, but the human data reviews were still the long pole in the tent. We did this because we believe this level of research, review, and oversight is needed with today’s tools.
Ultimately, the A.I.’s produced deeper research in a shorter timeframe than a human could have, but the error rate of modern A.I.’s is still too high to trust them implicitly. During our project post-mortem, the team discussed whether the effort was worth it (it was), where the A.I.’s showed weaknesses in their analysis, and how we could refine and streamline the process in the future.
Recently, a doctor wrote an article in the New York Times encouraging the use of A.I. in medicine; his salient point was that the A.I. doesn’t need to be perfect; it just has to be better than the doctors are. We feel that this process, while not perfect, produced a final product that was more accurate than a human performing the same research could have achieved, and in a shorter period of time. We will continue to refine this process, adding or removing steps and incorporating new A.I.’s and tools as they are released.
We hope you learned something from this article; we most certainly learned a lot:
Do use the A.I. tools at your disposal; there’s little to no cost, and you likely will end up with a better product.
Do not trust everything the A.I. tells you; human validation is still required to ensure accuracy.
Combining multiple sources can reduce the rate of hallucination (this works with humans too).
Giving consistent prompts to the A.I. models produces more cohesive results.
Determining a workflow in advance gives you the ability to shape the outcome to the quality that your customers demand.
Cairo Networks is available to assist you with your data analysis needs; to help you produce better business outcomes in a faster timeframe; to streamline your existing business processes where possible; and to smoothly integrate A.I. into your business workflows where it makes sense to do so.
(By the way, this blog was written by Michael Landry, CEO of Cairo Networks, and a human.)