by David Mosher
Rhindon Cyber was a co-sponsor of the ALTS Miami (Alternative Investments) Conference this week at the iconic Eden Roc resort in Miami Beach. The investment firms present were hedge funds, family offices, private debt, private equity and venture capital firms. All keynote and panel discussions during the event discussed Artificial Intelligence. Every Alternatives sector was somehow investing into AI based on their strategy – whether it’s physical infrastructure like data centers or power plants for data centers, emerging AI vertical software companies, or AI support services (e.g. data providers, tools). Further, every investment professional that spoke was using AI in their workflows. The level of adoption ranged from asking public large language models to summarize documents, to AI meeting note takers to training in-house models with multiple signals (data) and having AI agents act on those signals.
On Tuesday, December 9, 2025 I had the privilege of hosting a roundtable with alternative investment professionals. We discussed the intersection of cybersecurity and AI, the current state of AI adoption in investment firms, and key risks to be aware of. Below is a summary of my opening remarks.
Pragmatic Artificial Intelligence Adoption Principles for Investment Professionals
Artificial intelligence (AI) is still early in the adoption and capability cycle, and there is a lack of standardization on the best ways to “operate” AI within investment firms, regulation is emerging from many sources, and cybersecurity risks abound. I see a lot of excitement around AI’s ability to create competitive advantages and efficiencies for investment firms, and more than a few risks. There are three simple principles to help safely navigate this early era of AI adoption.
- Visibility, show your work
- Know your threats
- Govern AI (like you do everything else)
Visibility, Show Your Work
Adopting AI tools/generative models greatly increase productivity when done well. This could lead investment analysts and managers to become lazy via an overreliance of the output without critical analysis. Regulators are aware of this tendency, and over time it’s likely that regulatory agencies like the Securities and Exchange Commission (SEC) will leverage existing rules and develop new rules that apply to AI usage. For example, it could be argued that the Books and Records Act applies to AI output. Today the SEC asks for notebooks, or recordings from expert calls or notes on management meetings. Tomorrow they may ask for a list of prompts used in generative AI models, and likely the outputs generated. This means that prompts and outputs should be saved, audited, and available for regulators to review.
Know Your Threats
Percy Spencer was a Raytheon scientist who walked in front of a radiating “magnetron” (radar) in a lab in 1945. The peanut chocolate bar in his pocket melted and he was intrigued. He brought in some popcorn and the bag exploded all over the lab. Finally he exploded an egg in a pot. In 1946 Percy filed patents for the first commercial microwave. It wasn’t until 1972 that the Consumer Union (know for producing the Consumer Reports magazine) published an article basically saying “We’re not so sure that microwave ovens are safe.” That left 26 years of microwave oven use BEFORE fully understanding the negative health impacts from being exposed to microwave radiation and electromagnetic frequency leaks.
AI use today is similar to Percy in 1945. There’s a lot not known, so it’s important to be diligent and understand/mitigate threats that are known today. Specifically for investment professionals, some threats to consider. First, alternative data in large language models. The regulators want to know that an investment firm has done diligence on ensuring that material non-public information (MNPI) is NOT in the data used to make investment decisions. This means that investment professionals own the responsibility for not using MNPI in their models. If you are going to ingest alternative data – it’s probably wise to not rely on third party attestations alone, but to also provide some level of statistically significant validation sampling of the data yourselves.
There are a number of cybersecurity threats for large language models, and the OWASP Foundation maintains a list of the 2025 top 10 risks for large language models (LLMs). I want to present the top four for awareness, and this is an area where your cybersecurity team should be heavily involved.
- Prompt Injection attacks. Adversaries can get to a system prompt and essentially have full model access. I’ve seen security researcher demonstrations where this can happen in 30 seconds for some of the publicly hosted large language models.
- Sensitive information disclosure. Of course investment professionals intuitively know they should not share confidential information with public large language models. In 2023 Samsung employees uploaded confidential source code to ChatGPT. Couple that with risk number 1, and it means that not only was the Samsung IP available to OpenAI, it was also available to any adversaries who cracked OpenAI and got to a system prompt.
- Supply chain attacks. How are your suppliers using AI, and how do you know that they are using it to provide accurate decisions or outputs?
- Training data poisoning. Adversaries can poison data used to train models. For investment professionals – this could be inaccurate financial information, MNPI data, or unverified source data.
Govern AI (like you do everything else)
AI must be governed in your firm. You already mange technology risks, cyber risks, and operational risks. AI is just “another one of those”. There are multiple US and European regulatory efforts emerging and already in place around AI. Federal regulations from the Consumer Protection Bureau as an example. If you are making decisions that involve consumers – all factors have to be explained in those decisions, even if they came from a “black box” AI system. Dodd Frank now includes some AI regulations. There are widely varying and potentially onerous state level regulations in California, Colorado, Utah, Texas and New York. For a summary of the current US regulatory efforts – see one of our previous blog posts.
I sat through the live streamed SEC Investment Committee meeting last week, where there was a subcommittee report approved with some new disclosure rules. The rule has a way to go for final approval, but should be considered indicative of where the current administration is headed. For details on the rules, see this Rhindon Cyber newsletter issue from last week.
There is some good news in the form of early AI risk frameworks. The National Institute of Standards and Technology (NIST) has the AI Risk Management Framework and ISO has an AI management standard now as well – ISO 43001:2023. We have produced a short AI governance guide for financial services firms that can be downloaded here.

