Trust Before Autonomy: Building Agentic AI You Can Actually Trust

By: Multifamily Weekend Update
At RETCON, Vijay Anand of MRI Software delivered a clear message to the industry: the next wave of AI isn't just about capability; it's about trust.
Agentic AI is quickly moving beyond chatbots and copilots into systems that can take action across leasing, finance, maintenance, and customer service. Anand described the moment as a turning point. “We’re at an inflection point,” he said. “In the last six to nine months, agentic AI has gone mainstream.”
But autonomy introduces real risk. Anand warned that once AI moves from assisting people to acting on their behalf, organizations must confront financial, legal, and operational consequences. “These are probabilistic systems,” he said. “They can hallucinate. They’re never going to be 100 percent.”
MRI has been preparing for this shift for years. Anand explained that the company established a Responsible AI framework more than three years ago built on accountability, fairness, transparency, reliability, and security.
Those principles now guide systems already in production. “We have about 20,000 monthly active users of AI features,” he said, noting that MRI has also processed more than two million leases using AI.
Anand emphasized that companies shouldn't rush directly to full automation. Instead, they should follow a maturity path: AI that assists, then AI that proposes actions for human approval, and only later AI that automates. "For your first workflow, it probably needs to be in the approval phase with a human in the loop," he explained.
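That maturity path can be sketched in a few lines. This is an illustrative Python sketch, not MRI's implementation; the `Maturity` levels and the `run_workflow` routing are assumptions drawn only from the three phases Anand described.

```python
from enum import Enum

class Maturity(Enum):
    ASSIST = "assist"      # AI drafts; a person does the work
    APPROVE = "approve"    # AI proposes; a person must sign off
    AUTOMATE = "automate"  # AI acts on its own

def run_workflow(maturity: Maturity, proposal: str, human_approves) -> str:
    """Route a proposed action according to the workflow's maturity level."""
    if maturity is Maturity.ASSIST:
        return f"draft only: {proposal}"
    if maturity is Maturity.APPROVE:
        return f"executed: {proposal}" if human_approves(proposal) else "rejected"
    return f"executed: {proposal}"  # AUTOMATE: no human gate
```

A first workflow, per the talk, would be configured at `Maturity.APPROVE`, so nothing executes without a human sign-off.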
No matter the workflow, strong guardrails must be built directly into the architecture of agentic AI systems. One of the most important is identity and permissions, ensuring that agents inherit the same access rights as the user invoking them. As he explained, “If a property manager cannot approve a budget, the AI agent acting on their behalf shouldn’t be able to either.”
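The inheritance rule in that quote reduces to a simple check: the agent can do exactly what its invoking user can do, and nothing more. A minimal sketch, with hypothetical permission names:

```python
def agent_can(user_permissions: set[str], action: str) -> bool:
    """An agent inherits exactly the invoking user's permissions: no escalation."""
    return action in user_permissions

# Hypothetical role for illustration: a property manager who cannot approve budgets.
property_manager = {"create_work_order", "send_resident_email"}

agent_can(property_manager, "create_work_order")  # allowed for user and agent alike
agent_can(property_manager, "approve_budget")     # denied for both
```

The key design choice is that the agent carries no permission set of its own; it is evaluated against the user's, so there is no path for the AI to act beyond the human it represents.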
Beyond permissions, companies must carefully control tool actions, recognizing the difference between AI that reads information and AI that writes to systems of record. Once an agent can send emails, adjust financial ledgers, or issue work orders, the stakes rise significantly and human oversight becomes critical.
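One way to express that read/write distinction is to classify tools up front and gate the write side behind approval. The tool names below are hypothetical, chosen to match the examples in the text:

```python
# Hypothetical tool registry for illustration.
READ_TOOLS = {"lookup_lease", "get_balance"}
WRITE_TOOLS = {"send_email", "post_ledger_entry", "create_work_order"}

def invoke_tool(tool: str, approved: bool = False) -> str:
    """Reads run freely; writes to systems of record require human approval."""
    if tool in WRITE_TOOLS and not approved:
        raise PermissionError(f"'{tool}' writes to a system of record; approval required")
    if tool not in READ_TOOLS | WRITE_TOOLS:
        raise ValueError(f"unknown tool: {tool}")
    return f"ran {tool}"
```

Under this scheme an agent can answer questions all day, but the moment it reaches for `send_email` or `post_ledger_entry`, a human has to be in the loop.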
Finally, auditing and monitoring are essential. Organizations must maintain clear logs showing what actions agents performed, what data triggered those actions, and which systems were accessed, ensuring that any issues can be traced and resolved quickly.
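The three things Anand says a log must capture (what the agent did, what data triggered it, which systems it touched) map directly onto a structured audit entry. A minimal sketch, with illustrative field names:

```python
import datetime
import json

def log_agent_action(agent_id: str, action: str, trigger: str, systems: list[str]) -> dict:
    """Record what an agent did, what data triggered it, and what it accessed."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "triggered_by": trigger,
        "systems_accessed": systems,
    }
    print(json.dumps(entry))  # in production this would ship to an append-only audit store
    return entry
```

Because every entry names both the trigger and the systems touched, an unexpected action can be traced back to its cause rather than reconstructed after the fact.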
Human governance will continue to play a critical role. MRI operates a cross-functional AI advisory council that reviews new AI initiatives before they’re built and again before they’re deployed. “We ask teams what data they’re using, what models they’re using, what risks exist, and how they’ll mitigate them,” Anand said.
His final takeaway was simple but powerful. The industry has long embraced “secure by design” in software development. For AI, Anand believes the new principle must be just as clear: “Trust by design.”