4 Comments

This is a great write-up. Can you say more about this in the footnote:

If self-serve analytics is possible in your org, as it has been for some of mine, having better infrastructure allows your stakeholders to use data for more decisions, quicker - this is an indirect way for data to help bear risk.

It is inferior to the more consultative direct ways for data to work with stakeholders, but oftentimes you aren’t trusted to do this without getting the basics right.

How do you define whether self-serve analytics is possible? Are you speaking from a technical perspective, a people perspective, or both? Neither? How much time would you suggest spending on building self-serve capabilities vs. doing "analysis" projects to drive insight? It seems the latter would be more valuable, but also more time-consuming.


Self-serve analytics works for the part of the analytic workload that can be answered within the scope of a semantic layer. Tools like Looker and Lightdash have enabled this in some orgs. I don't think a tool that allows for 100% self-serve exists, or ever will.
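To make "within the scope of a semantic layer" concrete, here is a minimal sketch of the idea, not anything specific to Looker or Lightdash; the names (total_revenue, region, orders) are made up for illustration. Self-serve answers come only from metrics and dimensions the data team has defined up front; anything outside that scope falls back to an analyst.

```python
# A minimal sketch of semantic-layer scope: self-serve questions are answered
# only from metrics and dimensions defined up front. All names here are
# hypothetical, not taken from any particular tool.

PREDEFINED_METRICS = {
    "total_revenue": "SUM(amount)",
    "order_count": "COUNT(*)",
}
PREDEFINED_DIMENSIONS = {"region", "order_date"}


def build_query(metric: str, dimension: str, table: str = "orders") -> str:
    """Return SQL for a self-serve request, or fail if it falls outside scope."""
    if metric not in PREDEFINED_METRICS:
        raise ValueError(f"Metric '{metric}' is not defined in the semantic layer")
    if dimension not in PREDEFINED_DIMENSIONS:
        raise ValueError(f"Dimension '{dimension}' is not defined in the semantic layer")
    return (
        f"SELECT {dimension}, {PREDEFINED_METRICS[metric]} AS {metric} "
        f"FROM {table} GROUP BY {dimension}"
    )


# Within scope: a stakeholder can answer this without an analyst.
print(build_query("total_revenue", "region"))

# Outside scope: this would raise, and the question goes to an analyst instead.
# build_query("customer_lifetime_value", "cohort")
```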

I see this investment in self-serve analytics as freeing up analysts' time for bigger and more complex analysis projects.


I've used Looker in my current and previous roles. I think one of the traps teams can fall into is continually delivering more and more data to Looker without stepping back to understand what decisions, if any, are being made with that data. At some point you need to outline what will and won't be part of self-serve, and then create a strategy for how other analytical projects are staffed.


Yes, definitely. What those tools are expected to solve should have a limited scope, to prevent every possible report from being enabled.

Some work still needs to be done by analysts in a more manual, exploratory way. They can use more flexible tooling for this, like Count, Hex, or whatever else.
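For contrast, a rough sketch of the kind of manual, exploratory work that stays with analysts: ad-hoc joins and one-off cuts that no predefined metric covers, whether that happens in Count, Hex, a notebook, or plain pandas. The tables and columns below are hypothetical.

```python
# A one-off exploratory question no semantic-layer metric covers:
# do customers who raise support tickets spend differently?
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [120.0, 80.0, 200.0, 50.0],
})
support_tickets = pd.DataFrame({
    "customer_id": [1, 2, 2],
    "tickets": [1, 1, 1],
})

# Ad-hoc join and comparison, shaped to this question only.
spend = orders.groupby("customer_id")["amount"].sum().rename("total_spend")
tickets = support_tickets.groupby("customer_id")["tickets"].sum()
combined = pd.concat([spend, tickets], axis=1).fillna(0)
print(combined.groupby(combined["tickets"] > 0)["total_spend"].mean())
```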
