January 25, 2025


Three Mostly Vacant Project Management Office Positions AI Can Augment

Distinguished Analyst and Research Fellow, Info-Tech Research Group. Insights and recommendations for projects, portfolios and change.

In my previous article, “How An AI Audit Can Jump-Start Your Project Portfolio,” I introduced the concept of using AI to audit IT project portfolios with a level of objectivity that can supplement human judgment, which is often influenced by external factors.

Over the past two decades, IT leaders have focused on securing project approvals and assembling top talent to drive success. However, many organizations have struggled to manage the volume of projects, overwhelming their capacity. As a result, individual projects often falter due to sheer overload.

The AI audit can provide an early warning signal for projects and portfolios that are falling short of expectations by using a well-written, unambiguous policy document as the basis for practitioner interviews.

PMO Roles That Are Suitable For AI

A common challenge in IT governance has been the tendency to apply short-term thinking in a long-term context. We’ve prioritized the immediate satisfaction of getting projects approved while neglecting the long-term risk of diluting shareholder value by taking on more than we can effectively execute.

I’ll skip the broader discussion on the obligation to maximize shareholder value. Since the Dodge v. Ford Motor Co. decision in 1919, the principle of shareholder primacy has been clear: “A business corporation is organized and carried on primarily for the profit of the stockholders.” While recent years have elevated the importance of employees and customers, this shift does not justify the unchecked approval of projects beyond what can reasonably be managed.

With that in mind, here are the three roles I’m augmenting with AI Audits and how they can be approached effectively. While you might classify these as governance and compliance roles, I believe they’re better described as internal checks designed to encourage greater diligence. These roles are:

Portfolio Auditor

This role leverages AI to assess how well the PMO director is stewarding the corporation’s human and financial resources. Key questions to explore include, “Are efforts to minimize waste proportionate to the scale of spending?” and “Are projects aligned with strategic objectives and is capacity planning realistic?”

Project Risk Auditor

The focus here is on identifying and mitigating risks at the portfolio level, with AI assisting in pinpointing vulnerabilities. Ask the interviewee, “How are risks from shared resource pools, high staff turnover or unstable vendor supply chains being tracked and addressed? Are contingency plans in place for volatile strategic environments?”

Change Auditor

This role ensures that completed projects deliver their promised value and that lessons learned are communicated effectively. Questions to guide the audit include, “How are post-completion benefits being measured against forecasts?” and “Are mechanisms in place to ensure continuous improvement in project delivery and outcomes?”
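The three auditor roles above lend themselves to a simple, reusable structure that a prompt pipeline could consume. The role names and questions below come directly from this article; the dictionary layout and the `interview_script` helper are just one illustrative way to organize them, not a prescribed implementation.

```python
# Illustrative only: the three PMO auditor roles and their guiding
# questions from this article, organized as reusable interview scripts.
AUDITOR_ROLES = {
    "Portfolio Auditor": [
        "Are efforts to minimize waste proportionate to the scale of spending?",
        "Are projects aligned with strategic objectives and is capacity "
        "planning realistic?",
    ],
    "Project Risk Auditor": [
        "How are risks from shared resource pools, high staff turnover or "
        "unstable vendor supply chains being tracked and addressed?",
        "Are contingency plans in place for volatile strategic environments?",
    ],
    "Change Auditor": [
        "How are post-completion benefits being measured against forecasts?",
        "Are mechanisms in place to ensure continuous improvement in project "
        "delivery and outcomes?",
    ],
}

def interview_script(role: str) -> str:
    """Render one role's questions as a numbered interview script."""
    lines = [f"{i}. {q}" for i, q in enumerate(AUDITOR_ROLES[role], start=1)]
    return f"{role} interview:\n" + "\n".join(lines)

print(interview_script("Change Auditor"))
```

Keeping the questions in one structure means a single prompt template can drive all three audits by swapping in the relevant role.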

You’ll notice that I called these roles “auditor” rather than “manager.” This is because generative AI falls far short of autonomously managing people or processes—a topic I could discuss endlessly. However, what it can do, quite effectively, is guide a skilled individual. With the support of a well-crafted policy document, a common generative AI tool can help identify waste, risks and communication gaps.

Consider these critical questions: Is there an engaged sponsor taking responsibility for the project’s costs and the realization of its forecasted benefits? Are you measuring the actual benefits achieved? Do you even have forecasted benefits? And, is the plan both realistic and achievable?

Far too often, these questions are either overlooked or asked without getting a confident “yes” as the answer.

The Process For Developing AI Agents

Let’s look at the AI agent development process in two dimensions:

First, the policy document needs to be actionable, consistent and unambiguous. If 10 people read it, there should be one interpretation. It should avoid euphemisms like, “We encourage a culture of strong sponsorship” and instead state expectations plainly. Something like, “Sponsors are accountable for the forecasted costs and benefits of their project proposals” is unambiguous.

Most importantly, your policy statement should be universally enforceable. Just as social scientists have found that replacing broken windows in abandoned buildings decreases the likelihood of more broken windows, having some unenforceable statements in your policy document renders the entire document difficult to enforce.
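One way to make “universally enforceable” concrete is a quick lint pass over draft policy statements that flags hedging language before the document is finalized. The hedge-word list below is my own illustrative choice, not a standard; the two example statements are the ones discussed above.

```python
# Illustrative sketch: flag policy statements that use hedging language
# and therefore cannot be audited as a pass/fail check.
HEDGE_WORDS = {"encourage", "strive", "aim", "consider", "aspire"}

def is_enforceable(statement: str) -> bool:
    """Treat a statement as enforceable if it avoids hedge words."""
    words = {w.strip(".,").lower() for w in statement.split()}
    return words.isdisjoint(HEDGE_WORDS)

policy = [
    "We encourage a culture of strong sponsorship.",          # vague
    "Sponsors are accountable for the forecasted costs and "
    "benefits of their project proposals.",                   # enforceable
]

for stmt in policy:
    label = "OK" if is_enforceable(stmt) else "REWRITE"
    print(f"[{label}] {stmt}")
```

Even a crude check like this forces the conversation: every statement that fails the lint is one the AI auditor cannot verify a “yes” or “no” against.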

The second key element is the AI prompt itself. In my initial experiments, I primarily used a personal ChatGPT Plus account and Microsoft Teams Copilot within my organization’s Microsoft 365 environment. Of course, no sensitive information was shared during these tests.

A coworker created an internal AI prompt builder that handles technical tags and formatting. My first prompt serves as a “System” prompt, setting the overall context and defining the AI’s role. For example, it might say, “Imagine you are a project and portfolio management expert. Your task is to conduct an interview, stay on track and ask follow-up questions as needed.”

Next, I provide more specific instructions in the “User” prompt. For instance: “Conduct an interview based on my audit standard. Rephrase questions as needed to ensure clarity. Then give me a written summary of your findings.”
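Assembled in code, the System and User prompts described above might look like the sketch below. The role/content message format follows the common chat-completions convention; the `build_messages` helper and the placeholder audit standard are my own assumptions, so the sketch stops at building the payload rather than calling any particular API.

```python
# Sketch: assemble the System and User prompts from this article into
# the role/content message list most chat APIs expect. No API call is
# made here; pass the messages to your client of choice.
SYSTEM_PROMPT = (
    "Imagine you are a project and portfolio management expert. "
    "Your task is to conduct an interview, stay on track and ask "
    "follow-up questions as needed."
)

def build_messages(audit_standard: str) -> list[dict]:
    """Pair the fixed System prompt with a User prompt carrying the standard."""
    user_prompt = (
        "Conduct an interview based on my audit standard. "
        "Rephrase questions as needed to ensure clarity. "
        "Then give me a written summary of your findings.\n\n"
        f"Audit standard:\n{audit_standard}"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Sponsors are accountable for the forecasted costs and benefits "
    "of their project proposals."
)
print(messages[0]["role"], "->", messages[1]["role"])
```

Separating the fixed persona (System) from the per-audit instructions and standard (User) is what lets one agent serve all three auditor roles.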

The core idea is to use AI to guide users through enforceable policies, simplifying the process so the AI does the heavy lifting. This transforms proactive project governance—often seen as tedious—into a reactive process of simply responding to well-crafted questions.

It’s easy to envision AI assistants like this becoming deeply integrated into applications. Eventually, users will likely have intuitive interfaces for customizing how they interact with these agents.

Overcoming The Objections

The primary concern we hear about this idea is enforceability, which is entirely valid in organizations that have either deviated from their documented standards or lack written standards altogether. In such cases, this process could deliver significant value simply by encouraging the reinstatement of clear, enforceable guidelines.

The more common objection, however, comes from those who feel they don’t have the time to review their project and portfolio policies. Often, the high-pressure environment of constant approvals leaves little room for reflection on the broader portfolio’s feasibility or alignment. These concerns, while valid, tend to fade during periods of tighter budgets or spending constraints.

Conclusion

I’m not ready to hand over control to anybody’s AI bots at this point. However, I’m more than ready to have an AI auditor adopt the persona of a shareholder and ask if we’re acting responsibly.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
