Quick answer: A growing share of Australian solar buyers ask ChatGPT, Perplexity, or Google AI Overviews “who is the best solar installer in [city]” before they fill out a quote form. Getting cited by those tools in 2026 is a discipline distinct from classic SEO and requires three things: clean, factual, well-structured content that AI crawlers can parse without ambiguity; aggressive earned-mention strategy across third-party authoritative sites; and machine-readable trust signals (schema, llms.txt, accurate NAP, and verifiable claims). Installers who set this up in 2026 are building a moat the laggards will not catch for years.
The behavioural shift is real and faster than the SEO industry expected. Per Statista’s ChatGPT usage data, monthly ChatGPT users crossed 400 million globally by late 2025, and a meaningful share use it for local service research. Google’s AI Overviews now appear above the organic results for a rising percentage of commercial-intent queries. Perplexity’s growth among professional researchers is significant. Solar buyers, particularly considered buyers of $15,000+ battery systems, use these tools at rates well above the population average.
How is AEO different from SEO for solar?
Classic SEO ranks pages for a query. AEO (answer engine optimisation) and GEO (generative engine optimisation) get your business named inside a generated answer. The mechanism is fundamentally different. Google’s AI Overview composes a response from multiple sources and cites a subset. ChatGPT pulls from training data plus retrieval-augmented web sources. Perplexity cites visibly and links out. None of them work like the ten blue links you optimised for in 2018.
The implication is that AEO is more about being mentioned than about being ranked. A solar installer can be cited in a ChatGPT answer without their own website appearing high in Google. Conversely, a top-ranked Google listing can be ignored by AI tools entirely if the content is hard to parse or the trust signals are weak. The two channels have meaningful overlap but they are no longer the same channel.
The other shift is that AEO rewards content that directly answers questions in clean, short, declarative sentences. Long, hedged, content-marketing-style paragraphs are harder for language models to extract. Solar pages that lead with “The average 6.6kW solar system in Sydney in 2026 costs between $5,200 and $7,800 after STC rebates” get cited. Pages that lead with “Welcome to our company, we have been serving the community for 15 years” do not. We unpacked the framework in our broader answer engine optimisation services page.
What do AI tools actually look for when citing a solar installer?
AI tools cite sources that look authoritative, fresh, specific, and free of contradictions. For solar specifically, that means Clean Energy Council Approved Retailer status mentioned and verifiable, dated content (2026 prices, not “current” prices), explicit locations, and consistent NAP across the open web. Models penalise inconsistency invisibly. A business whose phone number differs between Google Business Profile, Facebook, and the website is treated as less reliable, full stop.
Third-party mentions are the other half of the equation, and they are the harder half. ChatGPT and Perplexity heavily weight authoritative third-party sources that name your business. A mention in RenewEconomy, SolarQuotes, the local paper, or an industry publication outweighs ten of your own pages. Building an earned-mention strategy is now a core AEO discipline. Press releases on PR distribution networks are largely ignored. Genuine editorial pickups, podcast appearances, and contributed expert quotes work.
The schema layer matters more than most SEO teams realise. LocalBusiness schema, Service schema, Person schema for your founders and engineers, and proper Article markup all give language models structured signals they can use confidently. A page with weak schema can still rank in Google but is materially less likely to be cited in an AI answer.
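To make the schema layer concrete, here is a minimal JSON-LD sketch of the LocalBusiness markup described above. Every value is a placeholder; the point is the structure: the name, phone, and address in this block must match your Google Business Profile and Facebook listing exactly.

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Solar Co",
  "telephone": "+61 2 0000 0000",
  "url": "https://www.example-solar.com.au",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example St",
    "addressLocality": "Sydney",
    "addressRegion": "NSW",
    "postalCode": "2000",
    "addressCountry": "AU"
  },
  "sameAs": [
    "https://www.facebook.com/examplesolar",
    "https://g.page/example-solar"
  ]
}
```

The sameAs links are what let a model tie your site, your Google profile, and your social profiles together as one entity, which is exactly the consistency signal NAP checking rewards.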
How does llms.txt fit into the picture?
The llms.txt proposal is an emerging standard, similar to robots.txt, that gives language models a curated entry point to your site. A well-built llms.txt lists your most important pages with short descriptions, hints at canonical answers to common questions, and gives the model context that a generic crawl would miss. Adoption is still early in 2026 but is growing rapidly, and the cost to publish one is essentially zero.
For solar specifically, llms.txt should point to the pages you most want cited: your service area pages, your battery and finance explainers, your published 2026 pricing benchmarks, and your case studies. Treat it as a curated reading list for an AI researcher.
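A sketch of what that curated reading list looks like in practice, following the markdown layout of the llms.txt proposal (H1 title, blockquote summary, H2 sections of annotated links). The URLs and descriptions here are placeholders:

```text
# Example Solar Co

> Sydney-based Clean Energy Council Approved Retailer. Key pages for
> answering Australian solar cost, rebate, and battery questions.

## Pricing
- [Solar prices NSW 2026](https://www.example-solar.com.au/solar-prices-nsw-2026): 6.6kW to 13.2kW benchmarks after STC rebates
- [Battery cost and payback](https://www.example-solar.com.au/battery-payback): 2026 figures by battery size

## Service areas
- [Sydney metro](https://www.example-solar.com.au/sydney): suburbs covered, typical install timelines

## Proof
- [Case studies](https://www.example-solar.com.au/case-studies): real installs with system specs and bills
```

One file at the site root, plain markdown, a few minutes of work.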
Not all of the major language models read llms.txt yet, but the ones that do reward the implementation, and the cost is low enough that the asymmetric upside makes publishing one a default recommendation.
The same principle applies to your robots.txt and crawler policy. Blocking GPTBot, ClaudeBot, or PerplexityBot is a defensible choice for some publishers but a self-inflicted wound for a solar installer trying to be cited. OpenAI’s GPTBot documentation and equivalent docs from Anthropic and Perplexity are the canonical references for what to allow. The default for a service business in 2026 should be to allow all of them and to monitor your citation rate.
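For a service business following that default, the robots.txt entries are short. The user-agent tokens below are the ones each vendor documents for its crawler as of writing; verify against the vendor docs before deploying:

```text
# Allow the main AI crawlers to read the site
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

If you later decide to exclude a specific section (say, internal search results), scope the Disallow to that path rather than blocking the crawler outright.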
What content format gets cited most often?
The content that gets cited most often follows a pattern: short direct answer paragraph at the top, structured sub-answers with H2 questions, dated figures with clear sourcing, and lists where lists are genuinely appropriate. This pattern is not coincidentally similar to what wins featured snippets in classic Google search. The same content engineering principles apply with a higher emphasis on factual precision and a lower emphasis on long persuasive prose.
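The pattern sketched out as a page skeleton (headings and figures here are illustrative placeholders):

```text
Solar prices in NSW in 2026                       <- page title: the buyer's exact question
The average 6.6kW system in NSW costs $X,XXX      <- short, dated, direct answer up top
to $X,XXX after STC rebates as of January 2026.

How much does a 10kW system cost?                 <- H2 phrased as a question
  ...dated figures with named sources...

Which rebates apply in NSW in 2026?               <- H2 phrased as a question
  ...a list, because rebates genuinely are a list...
```

Each H2 block should be extractable on its own: a model quoting just that section should still produce a complete, dated, attributable answer.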
For solar, the highest-citation-rate pages are typically: state-specific cost benchmarks (“Solar prices in NSW 2026”), rebate explainers, battery-specific cost and payback pages, and comparison content (“X system size vs Y system size for a 4-bedroom home”). These pages map to the exact questions buyers ask AI tools. Generic homepage content rarely gets cited because it answers no specific question well.
The format mistake to avoid is gating the answer behind a quote form. AI tools cannot fill out forms. Content that hides the actual numbers behind “request a quote to find out” is, functionally, invisible to AEO. The instinct from sales teams is to gate everything. The right move in 2026 is to publish the directional answer, then offer the quote as the precise version. The lead flow improves rather than worsens because trust precedes contact.
What we have seen: an Adelaide installer came to us in February 2026 with strong classic SEO but zero presence in AI Overviews and ChatGPT. We restructured their three top-priority pages around AEO patterns, added llms.txt, fixed schema, and ran a three-month earned-mention push targeting four trade and renewable-energy publications.
By May, the installer was being cited in 2 of 3 sampled ChatGPT prompts for “best solar installer Adelaide” and appearing in roughly 40% of relevant AI Overview impressions. Lead volume from “ChatGPT sent me” attribution (a question we now add to our intake form) reached 9% of new leads in the third month.
How do you actually measure AEO results?
The measurement layer is the weakest part of the AEO discipline in 2026 because the platforms expose less data than Google Search Console. The workable approach has three parts: prompt sampling (running the same set of buyer questions through ChatGPT, Perplexity, and Google AI Overviews weekly and tracking citation rate), source-of-truth lead capture (“how did you hear about us” with AI tools as explicit options), and brand search volume monitoring in Google Search Console for downstream lift.
Prompt sampling is unglamorous but reliable. Define 20 to 40 buyer-intent prompts (“best solar installer Sydney”, “10kW solar quote NSW”, “battery installer Geelong”), run them weekly, log who gets cited, and chart the trend. The list itself becomes a proprietary asset. The dashboards do not exist yet at the platform level, so building your own from this data is the only honest path forward.
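The logging half of prompt sampling can be a few lines of code. This is a minimal sketch, assuming you record (by hand or via each platform's API) whether your brand was cited for each prompt in a weekly run; the prompt list, dates, and results below are placeholders:

```python
from datetime import date

# Placeholder buyer-intent prompts; replace with your own 20 to 40.
PROMPTS = [
    "best solar installer Sydney",
    "10kW solar quote NSW",
    "battery installer Geelong",
]

def citation_rate(results: dict[str, bool]) -> float:
    """Share of sampled prompts where the brand was cited."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

def log_week(history: dict[str, float], day: date,
             results: dict[str, bool]) -> None:
    """Append one week's citation rate to the running history."""
    history[day.isoformat()] = citation_rate(results)

# Two sample weeks: no citations, then cited in 2 of 3 prompts.
history: dict[str, float] = {}
log_week(history, date(2026, 3, 2), {p: False for p in PROMPTS})
log_week(history, date(2026, 3, 9), {
    "best solar installer Sydney": True,
    "10kW solar quote NSW": False,
    "battery installer Geelong": True,
})
```

Run it weekly per platform (one history for ChatGPT, one for Perplexity, one for AI Overviews) and the trend line in `history` is the dashboard the platforms do not give you.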
The brand search lift is the leading indicator that matters most. When buyers see your business named in an AI answer, a percentage of them search your brand directly on Google to verify. That brand search trend, available in Google Search Console, is the cleanest aggregate signal that AEO is working. If brand search is climbing while paid spend is flat, AEO is contributing. If you want help designing a measurable AEO program, our AEO team works specifically with solar installers.
The takeaway
AEO is the most under-built channel in Australian solar marketing in 2026, and the gap will close fast. The installers who restructure content for citation, publish llms.txt, earn third-party mentions, and measure prompt-level citation rate will be the named answer when a buyer asks ChatGPT next year. The ones who keep optimising for ten blue links will be invisible in the channels where the next generation of solar buyers increasingly makes its first decision.