There are several cross-functional, quantitative problems that Program Managers are often called upon to address: management of sequence and schedule; prototype build logistics; project, portfolio and organizational finance; personnel project assignment and balancing; and bugfix triage, to name a few. PMOs certainly must provide people and methodologies capable of handling these problems. These are large problems with large numeric content that are often tied to dates, affect operational behavior, and are subject to big-league visibility and review. Numbers are not the only thing about Program Management, but having a solid quantitative grasp of these numbers leads to better decisions, better direction, better cross-functional alignment, and effective agility, and will surely lubricate most projects.
Corporate and Commercial Tooling
Although tempting, corporate tools like MRP are aimed at revenue production; they are therefore tightly managed, hostile to fluid prototype structures and to non-product items like test, internal systems, services and partner expenses, and they impose controls that add delay. More tools are available for Agile project management, but tools available to Program Managers pretty much operate each in its own black hole, with no facility to interact with tools from other suppliers or in different problem domains. Project tool structures are inevitably tied to the structure of the specific product under development, and to organizational structure and processes within your company. At the same time, project-level support from IT, or any other entity, to customize commercial tools or to develop custom tools for Program Management is exceedingly uncommon. So when faced with these problems, most Program Managers turn to a spreadsheet – there aren’t many alternatives. Program Managers are famous for “Big Spreadsheets”. “Everybody knows Excel!”… right? Needless to say, some of these efforts work better than others! I’ve probably tried them all… here are some design patterns.
So: down the left margin, row labels for each Item to be built or purchased, with rows labeled and sorted by organization, usage, and maybe system-ID. Across the top of the page, headings labeling columns for each Build, maybe with dates for each Build copied from the MS Project task-sequence plan. Within each column, for each Item row: the quantity to be built/purchased/used in that Build by the organization/usage/system-ID listed for that row, and maybe its cost, config notes, etc. Then the next Build in the next column, and so on. In specific rows, subtotals are embedded in each column to sum up items for each organization and each Build. Subtotals may use =SUM Excel formulas or similar, written with $A$1 cell-address notation. Higher-level totals might similarly sum from lower-level subtotal rows.
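A minimal sketch of that layout, with items, organizations and quantities invented purely for illustration, and $A$1-style subtotal formulas shown in place:

```
      A                 B             C
 1    Item            Build 1       Build 2
 2    Eng: Widget        10            4
 3    Eng: Cable          6            6
 4    Eng subtotal    =SUM(B2:B3)  =SUM(C2:C3)
 5    Test: Fixture       2            2
 6    Test subtotal   =SUM(B5:B5)  =SUM(C5:C5)
 7    Grand total     =B4+B6       =C4+C6
```

Note that every subtotal hard-codes cell addresses, which is exactly why inserting a row, or a new Build column, forces surgery on the formulas.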
This common structure captures and concentrates a quantified view of the program – certainly a useful tool – but what are its limitations? Two principal ones: it supports essentially a single view; and it’s brittle, prone to breakage, and not very expandable. Organized by Build, it might be hard to get a report by fiscal quarter or by element type. Because of the hard cell addressing, changes to builds, product structure, or subtotals often require major surgery. And it’s pretty much guaranteed that Development will make product structural changes, new builds will be needed, and Finance will want new ways of viewing expenses. The structure becomes increasingly brittle as these are accommodated, and eventually it breaks. It can take quite some effort to create new subtotals for a new financial view or an updated operational view, and then to maintain those as later situations require further change. Needless to say, these limitations restrain effective scaling into large or complex projects.
An improvement on this design is to structure the problem as a linear Big Table, implemented as rows of cells down a worksheet, each row describing one item allocation in one build; then use that table to drive Pivot Tables that create views with subtotals in different sort orders. This improves flexibility of views and lowers breakage potential, but retains some limitations. Creative work is needed to coerce subtotals in the pivots from source rows distributed throughout the structure; and structural change to builds or product may still require substantial surgery to core logic among the rows, and then to the logic driving pivot subtotaling.
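The Big Table is one row per allocation, something like this (column names and values invented for illustration):

```
Build   Date        Org    Item     Qty   UnitCost
B1      2024-03-01  Eng    Widget    10      25.00
B1      2024-03-01  Eng    Cable      6       4.00
B2      2024-05-15  Test   Widget     4      25.00
```

A Pivot Table over rows like these can subtotal by Build, by Org, by Item, or by date, without any hand-built subtotal rows.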
The structure can be further improved by using Excel Structured Tables with Structured References, instead of cells with $A$1 addressing. Structured References use [@[fieldname]] addressing, which is more robust than $A$1 addressing because Excel uses the table structure to find values, rather than relying on hard cell addresses specified by the table designer. Structured Tables are scoped to the workbook, so they can be referenced from any worksheet without knowing their location. Separating common data into separate tables, pulled into the top-level table with VLOOKUP, INDEX/MATCH/MATCH, SUMIF, or XLOOKUP/XMATCH functions, further reduces breakage and eliminates petty errors in repeated data. These are pretty big improvements; but this approach is still susceptible to major surgery among many records (rows) to accommodate structural change to the product or the builds.
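For example, a calculated column in an allocations table might pull unit cost from a separate items table with a single Structured-Reference formula (the table and column names here are illustrative, not part of any tool):

```
=[@Qty] * XLOOKUP([@Item], Items[Item], Items[UnitCost])
```

Because the formula names tables and columns rather than cells, inserting rows or moving the Items table to another worksheet breaks nothing, and the unit cost lives in exactly one place.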
The Big Table approach can be refined further, forcing Excel to find its own way around and reducing major surgery. Instead of one focal Big Table, normalize the project structural data into several tables, each describing a subset of the overall problem (example tables for Logistics problems are identified below in the “PowerOpI: PowerQueryTool” paragraph). From those, database JOIN functionality can dynamically incorporate the structural combinations, automatically creating a Pivot Cache containing a Big Table that describes and structures the overall program. Each of the sub-tables is built as above, using Excel Structured Tables, Structured References, INDEX/MATCH/MATCH, SUMIF, and XLOOKUP/XMATCH.
Normalization basically means tables are constructed of rows independent of each other, each row (record) specifying a key-value pair: a possibly-compound key identifying a data value for a single item, such as an item to be built or bought. The keys within each table connect each record to related data in the other tables. The tables are combined and exploded into a data structure that describes the full program, using SQL or M to perform a JOIN across the keys. The result of such a JOIN is itself an automatically-created Big Table that drives Pivot Tables and other reports. By full program I mean aggregated data points organized into an ordered structure: product-structure combinations quantified and combined into program-execution structure. For large or complex programs, this method can represent structural combinations beyond what manual human work can reasonably imagine and maintain. And the operation updates in seconds, providing a realistic basis for continuous, agile quantitative program plan update and execution.
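A sketch of such a JOIN in SQL, with table and column names invented for illustration: each allocation row is exploded against its Build and Item records, yielding Big Table rows with dates and extended costs attached.

```sql
SELECT b.BuildID,
       b.BuildDate,
       a.Org,
       a.Item,
       a.Qty,
       a.Qty * i.UnitCost AS ExtCost   -- extended cost per allocation row
FROM   Allocations a
JOIN   Builds b ON b.BuildID = a.BuildID
JOIN   Items  i ON i.Item    = a.Item;
```

Change a date in Builds, or a cost in Items, and every derived row reflects it on the next refresh; no row-by-row editing.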
JOINs can be implemented pretty straightforwardly using PowerQuery, MS Access, or SQL Server, each of which also opens a path to further database environments. Details on how to use these are discussed in stultifying detail on my website at www.softtoyssoftware.com. PowerQuery takes very little setup but somewhat constrains distributed use. MS Access requires a little setup, and expands distributed use, security, and maintenance focus. SQL Server requires substantial setup but expands distribution, platforms, linkage, security, scalability, and focused maintenance. All can run on your own computer, or in various distributed configurations. The tool I will make available, as discussed below, provides operational automation for this functionality, and templates for use in several dimensions.
This JOINed-Table structure lets Excel use its built-in capabilities to find its own way around the program data based on characteristics of the data, rather than depending on manually-created rigid addressing – I never use $A$1 addressing any more. Many views for operational and formal use can be created that self-update as the data changes, even through changes to product and operational structure. Automatic connections to dates, and to visualizations, are possible, so reports provide quantified management direction for date-driven operational processes. Referential integrity of the data is continually preserved from schedule to materials to finance, and on into planning, operations, and reporting – so the whole organization stays aligned even through change. Actuals are captured and retained, useful in guiding quantification and improvements for future projects. Change incorporation is focused, rippled through tool components by automation rather than by humans, and so less susceptible to human error or omission. Repeatability through automation simply leads to fewer errors.
The program manager and core team can create these Excel data structures manually, using the path described and referenced in this article, without a background that takes tens of thousands of dollars and several elapsed years of data-science training. So you can develop this yourself, using Project, Excel, Visio and optionally Access or SQL Server, by following the hints here and the specifics on my website. I’ve developed this working path for Microsoft tools; other environments may also be capable, but they’re outside my radar – the Microsoft path likely requires the least training, so it’s the most broadly usable. I can pretty much guarantee you will learn skills you’ll find most useful. It does require some geek-iness on your part, involving acquisition of a taste for data structuring, Excel, and JOINs. Knowing Pivot Tables, VBA and SQL will help; but I started from zero. You know who you are.
“PowerOpI” is my term for “Power Operational Intelligence”: operational quantification directing processes throughout the organization, including but not limited to business summary (“Business Intelligence”, or BI). The “PowerQueryTool” I’ll make available is free, and provides infrastructure aimed at Program Management that could also be applied to other problems. It’s designed to help Program Managers handle the quantitative problems described above, and incorporates a lot of what I learned. This version is based on PowerQuery, requiring very little setup. It’s two generations beyond the earlier versions I developed and used for the biggest projects I’ve managed, and scales seamlessly from small projects to large. It connects to schedule data from MS Project, and to visuals created in Visio that update automatically to reflect data changes – you saw examples in my post Complex Programs: Structure. PowerQuery is built into many versions of Excel (via Excel menu Data > Get & Transform Data > Get Data > …), and provides a user interface within Excel. It also extends to MS Power BI Desktop.
This PowerQueryTool provides templates for logistics projects: table structures for allocation of built and purchased equipment; product structure to support allocation of components, sub-assemblies, assemblies, and rack systems; date import and manipulation to support item build, purchase and integration cycles; and connection to element configuration and logistics (suppliers, distributors, RFQ/PO/Quote/Delivery/Distribution) for task execution. The templates illustrate how data is structured and linked among tables. Although the specific templates provided are for logistics, you can extend them to support other problems that you can express using tables. Templates for data Tables, QUERYs, and reports are further described on my website.
The PowerQueryTool supports the Excel Power Query user interface, although I recommend the more advanced capability of the M function language through its Advanced Editor. I can’t provide deep personal support, but my website at www.softtoyssoftware.com provides tons of descriptions and references on how all this works, and should give you a good running start. Documentation for M is pretty scarce, but I provide linked references, and enough templates covering the important cases to run projects at least as involved as the ones I’ve managed.
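To give a flavor of M in the Advanced Editor, here is a minimal sketch of a JOIN between two workbook tables; the table and column names are illustrative, not the tool’s actual templates:

```m
let
    // Load two normalized Structured Tables from the current workbook
    Allocations = Excel.CurrentWorkbook(){[Name="Allocations"]}[Content],
    Items       = Excel.CurrentWorkbook(){[Name="Items"]}[Content],
    // JOIN Allocations to Items on the Item key
    Joined      = Table.NestedJoin(Allocations, {"Item"}, Items, {"Item"},
                                   "ItemData", JoinKind.Inner),
    // Pull UnitCost out of the nested join column
    Expanded    = Table.ExpandTableColumn(Joined, "ItemData", {"UnitCost"}),
    // Add extended cost per allocation row
    WithCost    = Table.AddColumn(Expanded, "ExtCost",
                                  each [Qty] * [UnitCost], type number)
in
    WithCost
```

Load the result to a worksheet or to the Pivot Cache, and every dependent Pivot Table refreshes from it via Data > Refresh All.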
Beyond the PowerOpI PowerQueryTool, the overall structure supports SQL with MS Access and SQL Server, and SQL embedded in Power Query M. These extended capabilities are available, although not included in the base PowerQuery “demo” version. The external database configurations support increased flexibility for distributed operation; stored procedures in the database that improve security and maintenance; and greater ability to scale shared servers. These need a consulting arrangement, so they’re out of scope for this blog.
Before I make the PowerQueryTool available, I wanted this post to show where the methodology and tools are going, and how a PM can develop their own solutions using the capabilities I’ve explored, following the methods described and linked on this page, with or without the tool. The direction I’m describing works, and if it fits your problem, you can implement it directly. Then, when available, the PowerQueryTool will take you farther and faster.
I think if you’ve encountered some of the quantitative topics described above, you’ll recognize the challenges for your projects, and the potential to solve them in an agile fashion using the methods I’ve described. Just knowing that these solution paths exist should help – that eliminates a bunch of risks, big bets, and Day One problems I resolved along the way. The paths described are what has worked well, solving problems or bypassing them, and giving me the capability to handle some big challenges. You can follow them manually by learning the skills I’ve mentioned. Those skills are incorporated into the free demo, so there’s more leverage there. And the direction continues beyond that version, with few limits that I can see! So I hope my learnings give you power useful in managing your projects, and that the PowerQueryTool can take you even farther.