Good visualization takes messy, high‑volume data and makes a decision obvious. That is the bar we set at (un)Common Logic. We build charts and dashboards not to entertain, but to inform specific actions: where to invest, what to fix, which experiment to run next, and when to stop doing something that no longer pays off. Over time we have learned that a handful of principled habits consistently separate visuals that drive outcomes from those that quietly collect dust in a bookmarks folder.
Start with the decision, not the dataset
Most bad charts begin with a dump of everything that was easy to pull. That approach tempts you into plotting whatever is close at hand instead of what the decision actually requires. We begin by naming two things in plain language: the decision at stake, and the time frame for that decision. For example: should we expand our paid search budget for non-brand queries over the next quarter? With that frame, the chart almost chooses itself. You need a view of marginal cost per acquisition against capacity, not a collage of impressions, click-through rate, and device splits.
I keep a scrap notebook of questions stakeholders actually ask. The entries are unglamorous and concrete: why did Tuesday sink, which audience is cannibalizing organic, where did the margin go after the promo. When the question is real, visual requirements sharpen. For a Tuesday dip, you want a time series with day‑of‑week banding and an annotation for a site event. For cannibalization, you need side‑by‑side indexed series to show relative movement, not absolute totals that confuse scale. For margin erosion, a waterfall chart across cost components is more honest than a pie.
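To make the first case concrete, here is a minimal matplotlib sketch of a Tuesday-dip view: a daily series with weekend banding and one annotated site event. Every date, value, and label is invented for illustration.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic daily sessions; all numbers here are made up
rng = np.random.default_rng(7)
days = pd.date_range("2024-03-01", periods=28, freq="D")
sessions = pd.Series(1000 + rng.normal(0, 40, len(days)), index=days)

event = pd.Timestamp("2024-03-12")   # a Tuesday with a known site event
sessions.loc[event] -= 300           # the dip we want to explain

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(sessions.index, sessions.values, color="#1f77b4")

# Day-of-week banding: shade weekends so the weekly rhythm reads at a glance
for day in days:
    if day.weekday() >= 5:
        ax.axvspan(day, day + pd.Timedelta(days=1), color="gray", alpha=0.12)

# Annotate the known event instead of letting viewers guess at the cause
ax.axvline(event, color="firebrick", linestyle="--", linewidth=1)
ax.annotate("Checkout outage, 2h", xy=(event, sessions.loc[event]),
            xytext=(event + pd.Timedelta(days=1), sessions.min()),
            arrowprops=dict(arrowstyle="->", color="firebrick"))
ax.set_ylabel("Sessions per day")
plt.tight_layout()
plt.show()
```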
If you cannot write the decision in a single sentence, you are not ready to design the visual.
Define the metric like a contract
At (un)Common Logic we treat metric definitions as version-controlled agreements. You can pick the right chart and still fail if different teams compute the metric differently. Even a basic value like conversion rate can mean sessions-to-orders, users-to-orders, or clicks-to-leads. One client came to us with three dashboards showing three different conversion rates for the same campaign. Each was correct on its own terms; together, they were useless.
We put the definition on the canvas. Not in a tooltip, not buried in documentation. If the metric is a composite, we show the inputs and the calculation in a short subtitle or footnote. If sampling, filtering, or attribution rules apply, we disclose it. We also pin the denominator to the axis label when the risk of misinterpretation is high. A y‑axis that reads Orders per 1,000 sessions is specific and prevents a parade of Slack questions.
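As a sketch of what on-the-canvas definitions look like in practice (the weekly values are invented): the headline states the point, the subtitle carries the exact calculation, and the axis label pins the denominator.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical weekly values, purely for illustration
weeks = pd.date_range("2024-01-07", periods=8, freq="W")
orders_per_k = [21.4, 22.1, 20.8, 23.5, 24.0, 23.2, 25.1, 24.7]

fig, ax = plt.subplots(figsize=(7, 3))
ax.plot(weeks, orders_per_k, marker="o", color="#1f77b4")

# The contract lives on the canvas, not in a tooltip or a wiki page
fig.suptitle("Conversion efficiency is trending up", x=0.02, ha="left")
ax.set_title("Conversion rate = orders / sessions * 1,000; bot traffic excluded",
             loc="left", fontsize=8, color="dimgray")
ax.set_ylabel("Orders per 1,000 sessions")
plt.tight_layout()
plt.show()
```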
Precision beats mystery. Rounding can hide progress or overstate improvements. As a default, we keep one to two decimals for rates and basis points for very small changes when they matter. For currency, we match the audience. Finance wants cents. An executive may only care about whole dollars. The trick is to be consistent across a view. Mixing units is a fast path to confusion.
Context is not optional
A single sparkline without history can tell any story you want. We usually anchor charts with a baseline or a benchmark. That might be last period, a rolling median, a target line, or an external index. Context makes movement meaningful. A 12 percent growth rate sounds great until you realize the category grew 20 percent.
Comparisons work best when they are adjacent and aligned. Put series on the same scale if you can. If you must use a secondary axis, color it carefully and reinforce the mapping with labels on the series, not just the legend. We also like small multiples when the goal is to compare patterns across segments. Twelve thin, identical panels beat a single cluttered plot with twelve colored lines that cross like spaghetti.
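A rough sketch of the small-multiples pattern, with twelve synthetic segments on one shared scale:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Twelve synthetic segments sharing one y-scale so patterns compare honestly
rng = np.random.default_rng(3)
days = pd.date_range("2024-01-01", periods=90, freq="D")
segments = {f"Segment {i + 1}": 100 + np.cumsum(rng.normal(0, 2, len(days)))
            for i in range(12)}

fig, axes = plt.subplots(3, 4, figsize=(10, 6), sharex=True, sharey=True)
for ax, (name, series) in zip(axes.flat, segments.items()):
    ax.plot(days, series, color="#1f77b4", linewidth=1)
    ax.set_title(name, fontsize=8, loc="left")
    ax.tick_params(labelsize=6)
fig.suptitle("One panel per segment beats twelve tangled lines")
plt.tight_layout()
plt.show()
```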
Annotations deserve more use. We mark the day a price change went live, the date a tracking fix deployed, the span of a holiday. These notes do more than explain variance. They save hours of meetings. The audience sees the cause and effect and moves on to what to do about it.
Choose the simplest form that answers the question
There is no prize for novelty. Fancy visuals are the right choice only when the simple one cannot carry the load. Over time, a few patterns have earned permanent spots in our toolkit. They are boring and highly effective.
- Time series with reference bands for seasonality, showing the current period against a baseline
- Indexed comparisons that start different series at 100 to show relative growth (see the sketch after this list)
- Waterfall charts to disaggregate change from one total to another
- Bar charts sorted by value for rank and distribution
- Scatterplots with a trend line and quadrants for portfolio decisions
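Here is the indexed-comparison idea in a few lines of pandas, with invented revenue figures for two channels of very different sizes:

```python
import pandas as pd

# Invented weekly revenue for two channels with very different scales
df = pd.DataFrame(
    {"email": [52_000, 54_100, 53_400, 58_900, 61_200],
     "paid_social": [8_100, 8_700, 9_900, 11_400, 12_800]},
    index=pd.date_range("2024-04-07", periods=5, freq="W"),
)

# Index both series to 100 at the first period so relative growth is comparable
indexed = 100 * df / df.iloc[0]
print(indexed.round(1))
# paid_social finishes near 158 while email finishes near 118: the smaller
# channel is growing much faster, which absolute totals would have hidden.
```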
The danger with elaborate visuals is that they create cognitive overhead. If you need a legend longer than two lines, stop. If you need a paragraph to explain how to read it, stop. The audience should decode shape and color in seconds. Reserve complex forms for explorations, not for the final communication.
Use color to encode meaning, not to decorate
Color is a tool, not a palette to show off taste. We adopt a stubborn default: gray for context, a single strong color for focus. When we add a second color, it is to encode a second dimension of meaning, not to brighten the page. The most common misuse we see is a rainbow of segments in a bar chart where rank matters more than hue. That forces the brain to do extra work.
Accessibility is non‑negotiable. Around 8 percent of men and a smaller share of women have some form of color vision deficiency. We test with a simulator and avoid red‑green decisions. Blue‑orange is often safer. We do not rely on color alone to signal outliers or states. Line style, dot shape, and direct labels help. High contrast between text and background improves readability for everyone.
Legibility is part of color practice. Saturated fills can hide gridlines and wash out labels. Pastels look modern, but they can fail in a projector or in a screenshot compressed for email. We test our palettes in grayscale to see if the message still works. If the story falls apart without color, the encoding was fragile to begin with.
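One cheap way to run the grayscale test programmatically is to compare approximate luminance across the palette. This sketch uses a simplified linear luminance formula (ignoring gamma), which is close enough for a sanity check; the palette roles and hex codes are hypothetical.

```python
import numpy as np
from matplotlib.colors import to_rgb

def approx_luminance(color: str) -> float:
    """Rough perceived lightness (0 = black, 1 = white), ignoring gamma."""
    r, g, b = to_rgb(color)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Hypothetical roles: gray for context, one strong focus color, one alert color
palette = {"context": "#b0b0b0", "focus": "#1f77b4", "alert": "#ff7f0e"}
for role, hex_code in palette.items():
    print(f"{role:8s} {hex_code}  luminance = {approx_luminance(hex_code):.2f}")

# If two roles sit within ~0.1 of each other in lightness, they will blur
# together in grayscale, which means the encoding leans too hard on hue.
gaps = np.diff(sorted(approx_luminance(c) for c in palette.values()))
print(f"smallest lightness gap: {gaps.min():.2f}")
```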
Label with an editor’s precision
Labels turn data into statements. Direct labeling, where the value or name sits next to the line or bar, outperforms a legend in most cases. You eliminate eye travel and reduce errors. Legends belong in exploratory tools where the user needs flexibility. For a narrative chart, guide the reader.
We cut nonessential ink. Axis ticks are sparse and meaningful. Data labels appear only for peaks, troughs, and the most relevant points. We round with intent. For busy plots, we show totals and let the scale do the rest. Titles do not mumble. They tell the point: Mobile CPA fell below target after bid caps. A good title frees the viewer from hunting for a moral.
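A minimal example of direct labeling with matplotlib, using hypothetical CPA numbers; the focus series gets the strong color and both get labels at the line ends instead of a legend:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical monthly CPA by device
months = pd.date_range("2024-01-01", periods=6, freq="MS")
series = {"Desktop": [42, 41, 43, 40, 39, 38], "Mobile": [55, 52, 49, 47, 44, 41]}

fig, ax = plt.subplots(figsize=(7, 3))
for name, values in series.items():
    color = "#1f77b4" if name == "Mobile" else "#9e9e9e"  # focus vs. context
    ax.plot(months, values, color=color)
    # Direct label at the final point: no eye travel back to a legend
    ax.annotate(f"{name}  ${values[-1]}", xy=(months[-1], values[-1]),
                xytext=(5, 0), textcoords="offset points",
                va="center", color=color, fontweight="bold")
ax.set_title("Mobile CPA fell below target after bid caps", loc="left")
ax.set_ylabel("CPA (USD)")
plt.tight_layout()
plt.show()
```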
Footnotes matter. If there is a reason a value is missing, a count is lower, or a spike is a known artifact, we say so. That kind of honesty prevents chasing ghosts.
Respect scale, proportion, and zero
Nothing will erode trust faster than a compressed y‑axis that turns noise into narrative. When the variable is a quantity where zero has meaning, include zero. Revenue, orders, spend, and headcount live in that category. For rates and indices, zero may not be the anchor. A bounce rate change of three points looks flat on a 0 to 100 scale across a small panel. In that case, show change as a separate bar or a secondary small sparkline.
Log scales have their place, especially for data that spans orders of magnitude, like keyword volume or page load times with heavy tails. We label log charts clearly and never mix them with linear in the same series of panels. For percent changes, avoid the temptation to stack bars that imply additive relationships. Percentages are ratios. Stacked percent bars can hide important shifts in the middle components.
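The two rules sketched side by side with synthetic data: a quantity anchored at zero on the left, a heavy-tailed distribution on a clearly labeled log axis on the right.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(11)
# Synthetic data: revenue (zero is meaningful) and keyword volume (heavy tail)
revenue = 90_000 + rng.normal(0, 2_000, 30)
volumes = rng.lognormal(mean=5, sigma=2, size=500)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))

ax1.plot(revenue, color="#1f77b4")
ax1.set_ylim(bottom=0)   # a quantity with a meaningful zero includes it
ax1.set_title("Revenue: axis anchored at zero", loc="left", fontsize=9)
ax1.set_ylabel("Daily revenue (USD)")

ax2.hist(volumes, bins=np.logspace(0, 5, 30), color="#1f77b4")
ax2.set_xscale("log")    # orders-of-magnitude data earns a labeled log axis
ax2.set_title("Keyword volume: log x-axis", loc="left", fontsize=9)
ax2.set_xlabel("Monthly searches (log scale)")
plt.tight_layout()
plt.show()
```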
Proportion also applies to how many visuals you cram into a single view. A dashboard with nine panels of equal weight is a hierarchy failure. If one chart is mission critical, give it 60 percent of the real estate and demote the rest. Visual weight should mirror business weight.
Show uncertainty
Executives like crisp answers. Data rarely delivers them. We show uncertainty to build better decisions. Confidence bands around forecasts, shading for incomplete days, and error bars for A/B test outcomes keep optimism in check. We label models with training windows and last update dates. If a panel shows estimates, we say estimate in the subtitle and color the estimated values slightly differently from the observed ones.
Forecasts that behave well in backtests can still surprise in deployment. We include simple model diagnostics off to the side in analyst views, like mean absolute percentage error over the last few weeks. That context powers better interpretation. It also encourages healthy skepticism, which is cheap insurance against overfitting a story to a single chart.
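The diagnostic itself is short. Here is a sketch, with hypothetical actuals and stand-in forecasts, of the trailing MAPE we might print in an analyst view's margin:

```python
import numpy as np
import pandas as pd

def mape(actual: pd.Series, forecast: pd.Series) -> float:
    """Mean absolute percentage error; assumes actuals are never zero."""
    return float(np.abs((actual - forecast) / actual).mean() * 100)

# Hypothetical: four trailing weeks of daily actuals vs. the model's forecasts
idx = pd.date_range("2024-05-01", periods=28, freq="D")
rng = np.random.default_rng(5)
actual = pd.Series(1_000 + rng.normal(0, 50, 28), index=idx)
forecast = actual * (1 + rng.normal(0, 0.04, 28))  # stand-in for real predictions

# Printed in the margin of the analyst view, next to the forecast panel
print(f"MAPE, trailing 28 days: {mape(actual, forecast):.1f}%")
```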
Build for the right altitude
A single source of truth does not mean a single view for every person. At (un)Common Logic we design for three altitudes: executive, manager, and practitioner.
Executive views compress to the fewest metrics that predict outcomes and risk. They fit on one screen without scrolling. Each panel is self‑explanatory and carries next steps. A spike in CAC above target triggers a callout that links to the manager view.
Manager views focus on allocation. They compare channels, products, audiences, and geographies. They carry filters, but not too many. We choose slices that influence budgets, staffing, or roadmaps. A good manager view helps answer what to do this week.
Practitioner views are tools, not reports. They answer how and why. Controls get heavy here because the user needs to isolate cohorts, test hypotheses, and debug anomalies. We build these with the assumption that the viewer knows the data model. That gives us room for technical labels, reference tables, and raw counts.
The mistake is to hand an executive a practitioner tool, or to hand a practitioner a vanity summary. Fit beats uniformity.
Reduce friction in the workflow
A beautiful chart that takes 40 seconds to load will die. We plan for latency. Pre‑aggregation, caching, and limiting default date ranges keep dashboards snappy. For high‑cardinality dimensions like queries or products, we index and store rank tables by period so we can render top movers fast. When we do need heavy queries, we load the most useful panels first and fade in the rest. Progress indicators reduce abandonment, which matters more than you think.
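A toy version of the rank-table idea in pandas: compute top movers per period pair in a batch job, so the dashboard renders a tiny table instead of scanning raw rows. Query names and periods here are invented.

```python
import pandas as pd

# Toy stand-in for a huge query-level fact table we would never scan at render time
facts = pd.DataFrame({
    "query":  ["q1", "q2", "q3", "q1", "q2", "q3"],
    "period": ["2024-W20"] * 3 + ["2024-W21"] * 3,
    "clicks": [1200, 800, 300, 900, 1400, 350],
})

# Precompute a small top-movers rank table offline and store it by period,
# so the live layer reads a few rows instead of millions.
wide = facts.pivot(index="query", columns="period", values="clicks")
wide["delta"] = wide["2024-W21"] - wide["2024-W20"]
top_movers = wide.sort_values("delta", key=abs, ascending=False).head(10)
print(top_movers)
```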
Naming and organization reduce friction too. We use clear folder hierarchies, stable URLs, and consistent parameter names. If a report moves, we set redirects. We also version dashboards and name them by intent. It is better to have Spend Efficiency Q3 than Master Dashboard v12. That background hygiene frees teams to focus on interpretation.
Treat explanations like product features
We narrate the first time someone opens a dashboard. A short explainer video, a quick guided tour, or a few well-placed tooltips lower the learning curve. Not everybody reads documentation. We design the first-run experience like a product. Then we check analytics to see where users drop off. If most users never scroll to the bottom panel, we rethink the order or cut it.
We also use onboarding to set norms. For example, we state that incomplete days are shaded and excluded from week‑over‑week comparisons until noon local time. That one sentence prevents a recurring round of false alarms every morning.
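That norm is easy to encode. A sketch, assuming the noon local cutoff described above, of dropping non-final days before any week-over-week math (all values hypothetical):

```python
import pandas as pd

def complete_days(df: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    """Keep only finalized days: today is always partial, and yesterday
    is not trusted until the noon pipeline cutoff (the norm stated above)."""
    last_final = (now.normalize() if now.hour >= 12
                  else now.normalize() - pd.Timedelta(days=1))
    return df[df.index < last_final]

# Hypothetical daily orders, including a partial "today"
idx = pd.date_range("2024-06-01", periods=20, freq="D")
orders = pd.DataFrame({"orders": range(100, 120)}, index=idx)

now = pd.Timestamp("2024-06-20 09:30")  # before noon: yesterday is excluded too
trusted = complete_days(orders, now)
wow = trusted["orders"].iloc[-7:].sum() / trusted["orders"].iloc[-14:-7].sum() - 1
print(f"Week-over-week change, complete days only: {wow:+.1%}")
```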
Know when not to visualize
Some facts read better as a sentence than as a chart. A benchmark like Industry CPC for non‑brand rose 9 to 12 percent over the last six months across major networks is a single line that beats a dense column chart for many audiences. The medium should serve the message. We often write one or two lines directly in a dashboard above a panel to summarize the takeaways. Good annotation spares the team from reading the graphic cold.
There are other cases to skip the visual. If the sample size is too small to support a trend, say so. Do not plot a line for three data points. If the source data is in flux and likely to change materially, hold back until the system stabilizes. A wrong chart, seen at the wrong time, can linger in memory longer than a correction.
A brief vignette: the multi‑touch muddle
A retail client came to (un)Common Logic with a classic problem. Email, paid social, and organic were all claiming credit for an uptick in revenue after a spring campaign. Each team had a chart that proved its case. Each chart used a different attribution model. Meetings grew tense and circular. We started with a principle that earned buy‑in from everyone in the room: each model answers a different question, so we will show them side by side and label the question, not the model.
We built three panels. The first showed last‑click revenue by channel with a clear title, Who closed the sale. The second showed position‑based revenue, Who introduced and supported. The third showed an incrementality estimate from geo‑lift tests, Who moved revenue that would not have happened otherwise. We aligned scales, used the same gray context and one focus color per panel, and annotated the period with promo dates and site outages.
Two things happened. The teams stopped arguing about whose chart was the right one because the questions were clear. And the executive sponsor could now make a decision grounded in trade‑offs. Paid social did not close many sales, but it played a valuable assist role and showed positive lift in test markets. We increased its budget with guardrails. Email kept credit for closing and focused on send timing to avoid cannibalizing organic. One visualization set, built on honest principles, created alignment without drama.
Quantify change responsibly
Percentages play tricks. A jump from 1 to 2 percent is a 100 percent increase and still might not matter to the business. We anchor percent changes to absolute impact. A callout that says Signup rate up 0.8 points, 400 more signups last week, moves the room faster than Up 67 percent. For financial metrics, we express changes in dollars where practical. Framing matters because people make portfolio decisions with limited budgets, not with unlimited appetite for percent gains.
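The arithmetic behind that callout fits in a few lines; the numbers below are hypothetical but match the pattern:

```python
# Hypothetical numbers behind the callout pattern above
sessions_last_week = 50_000
rate_before = 0.012   # 1.2% signup rate
rate_after = 0.020    # 2.0% signup rate

points = (rate_after - rate_before) * 100            # percentage-point change
pct_change = (rate_after / rate_before - 1) * 100    # relative change
extra_signups = (rate_after - rate_before) * sessions_last_week

print(f"Signup rate up {points:.1f} points "
      f"({pct_change:.0f}% relative): {extra_signups:.0f} more signups last week")
```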
We also discourage stacked comparisons across mismatched totals. Comparing click‑through rates for two ads with different impressions is fine, but stacking those bars can imply the same base. We prefer side‑by‑side bars with direct labels and explicit base counts in a footnote. If a metric can be gamed by changing the denominator, we call it out and often pair it with a balancing metric. For example, we show cost per add‑to‑cart alongside cost per purchase to reveal funnel friction.
Keep exploration separate from presentation
Analysts need room to play. Executives need crisp views. Mixing the two creates artifacts like 40 filters, eight legends, and a screen that reads like a cockpit. We separate exploration from presentation. The exploration lives in notebooks and sandbox dashboards where we test hypotheses and iterate quickly. When a story is ready, we promote a clean version to the presentation layer with the fewest controls needed to drill into action. That separation also speeds load time and eases maintenance.
We treat the jump from exploration to presentation as a release. We freeze metric definitions, document inputs, and run user tests with a handful of real stakeholders. Feedback loops are fast. We would rather ship a minimal, stable view and expand it than unveil a crowded layout that nobody quite trusts.
A short chart selection map for common questions
- How is performance trending: a time series with a baseline band, plus a small multiple for key segments
- Where did the change come from: a waterfall between two totals, with components sorted by contribution
- What should we prioritize: a scatterplot with impact on the x‑axis, effort or cost on the y‑axis, and bubble size for volume
- Which variants are winning: a bar chart with confidence intervals, ordered by uplift
- Are we cannibalizing: indexed lines beginning at 100 for overlapping products or channels
Small details that carry weight
We sweat details that sound fussy until they save a quarter. Here are a few that recur.
Time zones: pick one per dashboard and print it near the title. Mixed zones quietly wreck comparisons.
Partial periods: shade them and exclude them by default from comparisons. If you include them, say why.
Week definitions: some teams run Sunday to Saturday, others Monday to Sunday. Set a rule and stick to it.
Currency: show the currency symbol, and if you mix currencies across regions, convert or separate views. An unlabeled dollar is an error waiting to happen.
Index starts: define your index anchor clearly. If you say Day 0 equals campaign launch, ensure every series starts there.
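The time zone rule in code form, with two invented events arriving at different UTC offsets; the declared dashboard zone decides which local day each event counts toward:

```python
import pandas as pd

DASHBOARD_TZ = "America/Chicago"   # declared once, printed near the title

# Two invented events arriving with different UTC offsets
events = pd.DataFrame({
    "ts": pd.to_datetime(["2024-03-20 01:30:00+00:00",
                          "2024-03-20 03:15:00-05:00"], utc=True),
    "orders": [1, 1],
})

# Convert before any daily rollup; here the two events land on different
# local days, which a mixed-zone rollup would have silently gotten wrong.
events["ts_local"] = events["ts"].dt.tz_convert(DASHBOARD_TZ)
daily = events.set_index("ts_local")["orders"].resample("D").sum()
print(daily)
```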
Tiny rules prevent big mistakes.
Performance and scale without drama
Charts must perform under load. We test with real production volumes, not toy samples. If a plot fails with 20 million rows, it fails, period. We build rollups at daily or weekly grains for historical views and keep raw, high‑granularity data behind drill‑throughs where only analysts go. We prune expensive transforms out of the live layer. When a calculation is stable and used widely, we materialize it.
We also plan for snapshots. Historical accuracy matters in marketing and product analytics. If a partner retroactively fixes attribution or a feed reprocesses, you can end up with moving targets. We snapshot daily aggregates so the past stays put. Reproducibility is a user experience feature, even if the user never sees the machinery.
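A sketch of the snapshot habit, assuming a hypothetical loader for the daily aggregates; the paths and function names are illustrative, not a real pipeline:

```python
import pandas as pd
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots/daily_revenue")   # illustrative path

def snapshot(df: pd.DataFrame, as_of: pd.Timestamp) -> None:
    """Write today's view of the aggregates to an immutable, dated file.
    If the feed reprocesses later, old snapshots preserve what we reported."""
    SNAPSHOT_DIR.mkdir(parents=True, exist_ok=True)
    path = SNAPSHOT_DIR / f"{as_of:%Y-%m-%d}.parquet"
    if not path.exists():            # never overwrite: the past stays put
        df.to_parquet(path)

# daily = load_daily_aggregates()   # hypothetical loader for the live feed
# snapshot(daily, pd.Timestamp.today())
```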
Testing visuals like features
A visualization is a product in miniature, so we test it. We run hallway tests with three to five people who were not involved in building it. Each has 60 seconds to tell us what the chart is saying and what they would do next. If their answer diverges from intent, we adjust labels, scales, or form. That cheap test catches issues before they calcify.
We also monitor usage. A dashboard that nobody opens is not a success. We log views, dwell time, scroll depth, and common filter combinations. If a panel never gets attention, we ask why. Maybe it belongs in the practitioner view. Maybe it should go away. Ruthless pruning keeps the signal strong.
A practical review checklist before you ship
- State the decision and the time frame in the title or subtitle
- Put metric definitions and denominators on the canvas, not just in docs
- Check color contrast, and verify the story holds in grayscale
- Verify axis choices, label directly, and avoid legends when possible
- Annotate known events, show uncertainty, and shade incomplete periods
Governance without bureaucracy
At (un)Common Logic, governance means shared standards that make collaboration easier, not red tape that slows work. We keep a lightweight style guide with examples, color palettes, typography rules, and preferred chart forms. We store it with living code snippets for common visuals so analysts can assemble consistent charts quickly. New team members learn faster, and stakeholders do not have to relearn the language of data every time the author changes.
We also audit dashboards quarterly. The audit is not about blame. It is about fitness. We ask whether the dashboard still answers the decisions it was built for, whether metrics have drifted, and whether controls match the current org. Sunsetting is a healthy practice. Every retirement is a small gift of attention back to the teams.
Ethics and honesty
Visuals carry power. They can nudge choices, build pressure, and create confidence. With that comes responsibility. We do not hide caveats in footnotes when stakes are high. We do not compress scales to dramatize flat trends. We do not cherry‑pick periods to flatter a campaign. We would rather deliver bad news cleanly than delay a corrective action. That ethic earns trust, and trust keeps stakeholders coming back to the data even when it hurts.
One habit helps: show the alternative view. If there is a plausible second interpretation, include it alongside the primary. That disarms allegations of bias and models the kind of responsible skepticism we want across the organization.
Closing thought
The best compliment a visualization can earn is short and direct: this helped me decide. Getting there is less about artistic flair and more about a sequence of disciplined choices. Start with the decision. Define the metric. Add context. Choose the simplest form. Use color with purpose. Label and annotate with care. Respect uncertainty. Fit the view to the altitude. Reduce friction. Test like a product. Govern lightly and ethically.
At (un)Common Logic we return to these principles because they work. They speed decisions, reduce noise, and turn data into a partner rather than a puzzle. And when a stakeholder opens a dashboard on a busy morning, sees a clear story, and knows what to do next, all the quiet work behind the scenes was worth it.