Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. (https://blogs.thomsonreuters.com/en-us/)

AI evidence in jury trials: Navigating the new frontier of justice
https://www.thomsonreuters.com/en-us/posts/ai-in-courts/ai-evidence-trials/
Mon, 06 Oct 2025 15:58:09 +0000

Key highlights:

      • AI evidence creates a credibility dilemma for juries — Jurors are prone to treating AI outputs as factual and authentic, which makes it difficult to distinguish between legitimate evidence and sophisticated deepfakes.

      • Current evidentiary rules are inadequate for AI — Traditional rules of evidence may not be compatible with AI’s ability to create hyper-realistic fabricated content and authenticate evidence.

      • Courts need proactive measures and adaptability — To navigate this new frontier, courts must implement comprehensive jury instructions, build boards of AI evidence experts, provide red flag training for all participants, and develop flexible legal guidelines that can adapt to the rapid evolution of AI technology.


AI evidentiary issues are presenting multi-layered challenges in the United States court system. To deal with them, courts must perform a careful balancing act: Embracing helpful technology while guarding against potential deception from AI-generated evidence, especially in jury trials.

Several issues make this balancing act a challenge. First, individuals — including those in juries — tend to treat AI outputs as authentic and factual. “We know that people often treat artificial intelligence, [and] the outputs that they receive from there, as factual, and that inflates credibility across the board,” says Jawwaad Johnson, Director of the Center for Jury Studies and Principal Court Management Consultant at the National Center for State Courts (NCSC). “We know that the artificial intelligence itself has become more believable.”

Second, audiovisual testimony can be more memorable in a juror’s mind than written testimony, according to Dr. Maura R. Grossman, a Research Professor at the University of Waterloo (Ontario) and an eDiscovery lawyer and specialist. And that can irreversibly influence juries, particularly if the audiovisual testimony is fabricated, because deepfakes are hard to unsee, Grossman explains.

The Honorable Erica R. Yew, of the Santa Clara County Superior Court in California, agrees that the novelty of this development can be a challenge. “In the past, video has been used to refresh someone’s recollection when they forgot something,” Judge Yew says. “And so now we are worried about videos being used to modify or change or corrupt someone’s memory.”

Finally, the liar’s dividend risks juries dismissing legitimate, properly authenticated evidence simply because AI manipulation is possible. “If we overdo it, we are going to make our jurors so skeptical of everything, and they will become cynical and question all evidence, even legitimate evidence,” Grossman says. “But if we give them no guidance, we certainly do not want them pulling out magnifying glasses” to ascertain authenticity on their own using ad hoc methods.

A recent webinar hosted by the National Center for State Courts and the Thomson Reuters Institute AI Policy Consortium looked at how courts can navigate the complexity of these psychological and technical challenges. The webinar panel included Judge Yew, Grossman, Johnson, and Megan Carpenter, Dean and Professor of Law at the University of New Hampshire Franklin Pierce School of Law, as the moderator.

The panel discussed how AI can impact evidence that is both acknowledged and unacknowledged. For example, acknowledged AI-generated evidence can enhance expert testimony and improve juror comprehension, such as in accident reconstruction. This is only possible when AI methods are transparent and there is clear chain-of-custody of the evidence.

Unacknowledged AI-generated evidence, on the other hand, can include deepfakes and other falsified evidence that are intended to deceive and may be hard to detect. Courts and lawyers must balance skepticism, disclosure, and expert input in these cases to protect juries without paralyzing them.

Limitations of current rules of evidence

Panelists also laid out options for practical legal frameworks that govern how acknowledged and unacknowledged AI-generated evidence can be admitted and evaluated in courtrooms. Different rules apply, of course, depending on the type of evidence. More specifically, in matters of acknowledged AI-generated evidence, validity, reliability, and bias are the primary considerations; and in matters concerning potential unacknowledged AI-generated evidence, the primary issue for judges and juries is determining authenticity.

The current rules regarding evidence may not always be compatible with AI-generated content in several critical ways. Traditional authentication rules under Federal Rule of Evidence 901 assume that evidence originates from reliable sources; however, given AI’s ability to create hyper-realistic deepfakes that are indistinguishable from authentic content, this assumption may not always be correct. In addition, current self-authenticating document provisions in Rule 902 may inadvertently admit fabricated evidence that has been processed through official channels, such as AI-generated documents filed with government agencies that then become official records.

Further, the technical sophistication of generative AI compounds these challenges. AI systems use a training method in which two algorithms compete to get better, which makes the resulting fakes harder to detect, Grossman explains, adding that current automated-detection tools often fail in these cases, and even human experts can only provide probability assessments rather than definitive determinations of authenticity.

In her work with Judge Paul W. Grimm, Grossman has proposed two key reforms. The first, related to acknowledged AI-generated evidence, sought to incorporate aspects of the Daubert standard into current evidentiary rules. The Federal Rules Advisory Committee instead decided to draft a new rule — proposed Federal Rule of Evidence 707 — which applies the expert reliability standards found in current rules to machine-generated evidence offered without expert testimony. This solution ensures AI-generated evidence meets the same reliability requirements as expert testimony while also addressing concerns about validity, bias, and methodological soundness.

In scenarios involving unacknowledged AI-generated evidence, such as cases in which the authenticity of the evidence is disputed, Grossman and Judge Grimm proposed that when a jury could reasonably find that the evidence both is and is not authentic, judges should apply a balancing test that weighs how much the evidence actually helps prove something important in the case (known as its probative value) against the risk that it will unfairly inflame, mislead, confuse, or waste time (its prejudicial effect).

What can courts do now

Courts must take critical steps to deal with AI evidence in jury trials as it becomes increasingly sophisticated. The NCSC’s Johnson cites the importance of ensuring court participants are prepared. “There’s a very technical aspect to this discussion,” Johnson says. “And there is certainly a space for education… not just for individuals serving on a jury, but for the people who help juries.”

Most importantly, courts should implement comprehensive jury instructions that help jurors understand their evolving responsibilities in evaluating digital evidence. And having judges allow jurors to ask questions about how AI is used in evidence during the process of screening questionable evidence can improve jury comprehension.

Two additional actions for courts include:

Building a board of AI evidence experts — Courts may need to appoint AI-detection specialists to ensure fair evaluation when parties lack resources for private experts. Judge Yew points out that there is precedent for having a group of experts on retainer whom the courts can call upon, such as those used in competency hearings to determine whether an individual is competent to stand trial in a criminal case.

Offering “red flag” training — Courts should also enable attorneys, judges, and jurors to spot suspicious evidence through training. Indeed, court participants should learn how to scrutinize evidence that seems too good to be true, particularly when original devices or documents are unavailable for examination, Grossman advises. Elaborate explanations for unavailability should trigger heightened scrutiny and potential expert analysis to verify authenticity before the evidence reaches a jury.

Finally, the panelists say they believe that, over the long term, any legal framework requires flexible rules that can adapt to rapidly evolving technology. Given the swift pace of AI development, rigid regulations would quickly become obsolete. Instead, adaptable guidelines that focus on principles like reliability, transparency, and fairness will better serve future legal proceedings in which AI evidence is involved.


To learn more about AI evidentiary issues, visit the AI Policy Consortium hosted by NCSC and the TR Institute and the specific resources for judges

Competitor or collaborator? Navigating legal tech’s role in document drafting
https://www.thomsonreuters.com/en-us/posts/legal/document-drafting/
Mon, 06 Oct 2025 13:17:12 +0000

Key takeaways:

      • Competitive approaches create bottlenecks and dissatisfaction — Resisting AI integration in legal document drafting leads to inefficient workflows, increased errors, and an inability to meet evolving client expectations.

      • Collaborative approaches boost efficiency and value — Embracing AI as a collaborative tool streamlines document processes, enhances accuracy, and allows legal professionals to focus on higher-value strategic work.

      • AI integration elevates document drafting through automation — Leveraging AI in document drafting automates repetitive tasks, improves consistency, and enables lawyers to provide more strategic, data-informed contributions to their organizations.


The legal profession stands at an inflection point. As AI transforms document drafting and review work, legal professionals face a fundamental choice: Compete against these tools or collaborate with them. This isn’t merely about adopting new software; rather, it’s about reimagining how legal services operate.

The legal industry is witnessing a significant shift from the traditional pen-holder approach to document management toward a more dynamic, collaborative method. When lawyers compete with AI, they resist integration and view these tools as threats to human expertise. Collaboration treats AI as technology that amplifies legal professionals rather than replacing them.

Current data from the Federal Bar Association reveals the stakes: 31% of legal professionals now use generative AI (GenAI) at work, up from 27% last year, and nearly 80% of firms plan to leverage GenAI within five years. The question isn’t whether AI will transform legal practice; the question is whether legal professionals will shape that transformation or be shaped by it.

The competition approach: What happens when legal resists

Legal professionals who compete with AI demonstrate predictable resistance patterns. They delay technology adoption, maintain paper-heavy processes, and stick rigidly to traditional workflows. This approach introduces bottlenecks in complex documents requiring multiple specialists’ input. This pen-holder model, in which one person integrates various perspectives into a cohesive document, becomes increasingly inefficient under competitive approaches.

Indeed, the data tells a stark story. Despite digital document management advances, 86% of attorneys still prefer pen and paper. This preference creates operational bottlenecks and increases risks of document loss or damage. In fact, continued reliance on paper documents in a digital world transforms retrieval into time-consuming processes that clients increasingly will not tolerate.

Not surprisingly, document processing suffers measurably under these competitive strategies. Manual review processes take longer and produce more errors compared to AI-assisted approaches. Quality control remains entirely dependent on human oversight, becoming increasingly expensive and time-consuming for high-volume work like contract reviews and due diligence projects.

The market consequences are equally clear. Clients now expect law firms to use AI wherever and whenever possible to improve efficiency so their outside lawyers can focus on strategic thinking. Firms that take a competitive stance toward AI usage struggle to meet these evolving expectations: Response times lag behind technologically advanced competitors, and pricing becomes less competitive as operational costs remain elevated while market rates adjust to AI-enhanced efficiency standards.

When legal speaks only in traditional terms rather than data-driven insights, it becomes the black box everyone struggles to navigate.

The collaboration approach: When AI enhances lawyers’ efforts

Collaborative approaches integrate AI as workflow enhancement rather than replacement technology. Co-authoring legal documents allows participants to view and edit documents simultaneously, ensuring changes are immediately visible to everyone involved. This creates hybrid workflows that combine human judgment with machine-processing capabilities while maintaining professional oversight.

The transition requires cultural adaptation alongside technological implementation. Successful collaboration demands that firms move beyond the ingrained preference for presenting polished final drafts toward embracing real-time collaborative processes. Training focuses on AI literacy while maintaining professional responsibility standards, which, in turn, helps legal professionals understand how to leverage technology without compromising quality or ethics.

The Federal Bar Association research demonstrates measurable benefits from collaborative AI adoption. At the firm level, 61% of respondents report that AI adoption has “somewhat” increased efficiency, while another 21% note significant efficiency improvements. Among practitioners using AI tools, 45% say they incorporate technology into daily workflows and 40% say they use it weekly. These users primarily leverage AI for drafting correspondence, brainstorming, and research tasks that previously consumed a disproportionate amount of time.

Collaboration also fundamentally transforms resource allocation. Repetitive tasks like document review and drafting become automated, freeing attorneys to focus on higher-value strategic work. Quality control evolves to incorporate both human expertise and algorithmic verification, creating more robust review processes than either humans or AI can achieve independently.

Further, cloud-based platforms can facilitate real-time communication, task assignment, document sharing, and collaborative editing regardless of geographical barriers. This technological infrastructure supports the kind of seamless collaboration that clients increasingly expect from modern legal services providers.

AI tool categories serve distinct collaborative functions. For example, contract analysis systems excel at extracting terms, identifying risks, and comparing provisions across sets of documents while humans provide strategic interpretation. Document drafting assistance provides template optimization, consistency checking, and compliance verification while lawyers maintain creative control. And due diligence platforms organize repositories, extract relevant information, and flag issues requiring human attention, enabling comprehensive review within compressed timeframes.

Smart legal contract management leverages advanced technology to redefine drafting, executing, and enforcing of agreements. When legal teams understand that contracts really are how businesses run and contain valuable data that often goes unnoticed, they can transform themselves from document creators into strategic intelligence providers.

The more that legal teams can track data points and use them to drive decision making, the more leadership values their contributions and understands their strategic importance.

Strategic implications: What the data shows

Direct comparison between these competitive and collaborative strategies reveals substantial operational differences. Collaborative implementations consistently demonstrate productivity advantages and enhanced accuracy across document categories.

Organization size significantly influences adoption success — Firms with 51 or more lawyers report 39% GenAI adoption rates, according to the Federal Bar Association research, as they benefit from dedicated technology teams and comprehensive training programs. Solo practitioners need streamlined solutions with minimal learning curves, but the fundamental benefits of collaboration remain consistent across firm sizes.

Implementation timing matters — With more than two-thirds (67%) of law firms planning document management system upgrades by 2025, according to Clio’s 2025 Technology Report, AI-driven features become essential for supporting strategic goals. Gradual implementation approaches achieve higher acceptance than rapid deployment strategies, but early adopters will be the ones to gain experience and client relationship advantages in evolving legal service markets.

Risk management remains paramount — Technology adoption introduces security, confidentiality, and professional responsibility considerations that collaborative approaches must address through robust protocols and ethical compliance frameworks. The goal isn’t efficiency at any cost but rather enhanced delivery of legal services that maintains professional standards while meeting modern client expectations.

The path forward

With 79% of law firm professionals incorporating AI tools into daily work, the profession has moved beyond asking whether to adopt AI toward determining how to implement it strategically.

Legal professionals should evaluate AI integration through structured analysis that considers practice requirements, client expectations, and competitive positioning needs. Success demands understanding that legal technology isn’t just about automation but about visibility and strategic value creation.


You can learn more about how the legal industry is adapting to the impact of GenAI here

Racing forward: Tax firm leadership strategies for the era of AI, advisory & private equity
https://www.thomsonreuters.com/en-us/posts/tax-and-accounting/tax-firm-leadership-strategies/
Fri, 03 Oct 2025 14:46:22 +0000

Key takeaways:

      • Strategic focus is crucial — Firms with a clear, written strategy and marketing plan are out-earning peers, and vague ambition is no longer sufficient in today’s competitive tax industry landscape.

      • Advisory services drive growth — Advisory service lines, particularly investment advisory and proactive tax planning, are expanding faster than traditional compliance work, and firms must blend recurring compliance jobs with scalable advisory to smooth revenue cyclicality and deepen client relationships.

      • Technology and leadership are keys to success — Firms must harness the power of AI, automation, and private equity to drive growth, and prioritize leadership systems, professionalization of leadership, and culture by design to endure the next decade.


If the last few years felt like standing at a crossroads for tax, audit & accounting firms, 2025 is the turn itself. Consolidation, private equity, AI, and evolving workforce expectations have tipped the profession from gradual change into a full paradigm shift. The recently released Rosenberg Report, an annual report on the tax industry, illuminates the state of the profession: Those firms that win from here won’t simply be competent — they’ll be intentional, strategically focused, and relentless about converting capacity into higher-value client impact. For tax firm leaders, the mandate is clear: Make bold, data-informed choices now or wait and watch competitors outpace you.

What the numbers are really saying

While revenue growth has cooled from the post-pandemic highs, settling near high single digits across the market, a striking share of that growth now is being powered by mergers and acquisitions, while organic expansion is proving harder to sustain. Meanwhile, income per equity partner has still edged upward, although profit growth lags revenue as costs, partner counts, and investment outlays rise.

The standout tax firms — especially those with higher billing rates and strong staff-to-partner ratios — are combining scale, leverage, and premium pricing to widen the gap between themselves and competitors. The message is clear: Profitable growth now depends less on squeezing more hours and more on getting the business model right.

Indeed, the report noted several areas in which tax firm leaders need to pay special attention.

Talent: Retention is better — but capacity isn’t the same as productivity

The report reveals turnover among tax professionals has fallen to its lowest level in years, which is a positive development. Yet billable hours per professional have declined, and many teams are logging less than 1,400 hours annually.

In response, some firms are hiring to build capacity, but revenue per full-time equivalent (FTE) employee slipped for the first time in five years — a signal that headcount without redesign is a blunt instrument. Offshoring and outsourcing remain in the toolkit, especially for larger firms, but as retention improves, the hiring mix is shifting from emergency capacity to structured, strategic resourcing. The imperative is smarter workload orchestration, not more bodies.

Strategy is no longer optional

Firms with a clear, written strategy and marketing plan are out-earning their peers, the report showed. That’s no accidental correlation — it’s the compounding effect of decisive prioritization. When leaders articulate where the firm will play and how it will win, firm investments align with strategy, pricing reflects value, and teams understand how to move the needle. Vague ambition is expensive; precision pays much better.

Advisory is the growth flywheel

Advisory service lines — particularly investment advisory and proactive tax planning — are expanding faster than traditional compliance. The most resilient firms are shaping portfolios that blend recurring compliance jobs with scalable advisory roles, thus smoothing revenue cyclicality and deepening client relationships. Technology is central here because it doesn’t just compress the cost of compliance work, it liberates capacity that can be redeployed into offering advice for which clients will happily pay a premium.

Private equity & technology: Forces to harness, not fear

Private equity (PE) is no longer an outlier; it’s reshaping governance, accelerating M&A, and boosting tech investment across the top end of the market. Whether you choose to partner with PE firms or compete against PE-backed platforms, you must operate with PE-grade rigor — and that means sharper KPIs, faster decision cycles, and a clearer capital allocation model.

On the tech front, AI and automation clearly are transforming tax preparation, workpaper assembly, and research — often eliminating 50% to 80% of the manual steps in defined use cases. And the top-performing firms don’t use AI just to cut costs; they turn their teams’ freed-up hours into advisory projects, client education, and proactive planning conversations that can fortify loyalty and margins.

Leadership & succession: Redesigned for durability

Today, partner demographics are shifting quickly. There are more younger partners, more women advancing, and more diverse paths into leadership. Non‑equity roles and flexible buy‑in models are becoming standard, while mandatory retirement policies are moderating to support smoother succession.

Compensation and buyout systems are maturing as committees and transparent formulas replace opaque, personality‑driven decisions. The firms that will thrive over the next decade already are professionalizing leadership the same way they professionalize client service.

The bottom line

Finally, there are several actions that smart tax firm leaders are already abandoning and others that they are strongly focusing on.

What to stop doing

      • Managing to utilization alone — Leaders need to shift their thinking to revenue per FTE, realization, and cycle time to reflect true performance.
      • Treating offshore resources as a plug‑and‑play fix — Integrate these resources into your firm’s standard processes with clear ownership and quality assurance.
      • Waiting for “post‑tax‑season” to improve systems — Improvement is a year‑round muscle that needs to be exercised. Schedule and track system improvements like any client deliverable.

What to double down on

      • Focusing on client segmentation and ideal‑client fit — Politely winnow misaligned work or burdensome clients and reinvest those hours into high‑potential relationships.
      • Promoting manager leverage — Equip managers with the ability to own scoping, pricing, and coaching so partners can drive market‑facing growth.
      • Encouraging culture by design — Flexible work is table stakes in today’s environment. Promote what differentiates your firm, especially its clarity of mission, feedback cadence, and recognition systems.

The tax, audit & accounting profession’s fundamentals remain strong, but the rulebook has been rewritten, as the Rosenberg Report illustrates. Firm growth will increasingly come from strategy, not inertia; from advisory impact, not additional hours; and from leadership systems, not individual heroics.

Smart tax firm leaders need to treat 2026 as a pivot year for their firms. Publish the plan, price to value, operationalize AI, and convert freed-up capacity into advice offerings your clients can’t imagine running their businesses without.

Those tax firms that move first, while measuring what matters, will define the next decade of tax leadership.


For more on the current state of tax, audit & accounting firms, check out the recent 2025 State of Tax Professionals Report from the Thomson Reuters Institute here

Reducing invisible burdens in court administration through automation
https://www.thomsonreuters.com/en-us/posts/government/reducing-burdens-automation/
Thu, 02 Oct 2025 17:18:59 +0000

Key insights:

      • Automation and AI can significantly alleviate administrative burdens in courts — Court professionals may be able to reclaim up to nine hours per week over the next five years, according to research.

      • Courts are under pressure to modernize and meet the expectations of digital natives — Courts are facing a generational shift in expectations that is pressuring them to adopt more modern tools and technology.

      • Successful implementation of technology requires a thoughtful and collaborative approach — Collaboration between judges, administrators, and IT staff is essential, and external-facing tools should prioritize user experience to reduce complexity and increase access to justice.


Bringing automation and AI-powered tools to data entry, case-filing processing, and updating court management systems over the next few years could help court professionals use their time more efficiently, according to the Staffing, Operations and Technology: A 2025 survey of State Courts from the Thomson Reuters Institute and the National Center for State Courts (NCSC).

Indeed, the report found that alleviating this invisible administrative burden could help professionals reclaim as much as nine hours per week over the next five years. As private sector law firms embrace automated technology, public sector legal departments and courts risk falling further behind.

The time for innovation is now, as caseloads mount, case complexity increases, and retirements and staffing shortages continue to plague courts. Fortunately, administrative professionals are beginning to warm up to targeted automation efforts and AI-powered tools to expand their efficiency.

The cost of administrative burdens

A Georgetown University study produced for the Administrative Conference of the United States defines administrative burdens as “onerous experiences people encounter when interacting with public services.” And unfortunately, many people do not access the rights or benefits to which they are entitled because of these onerous administrative processes within stressful, frustrating, and overwhelming government systems. In a legal context, administrative burdens hinder access to justice. In fact, low-income Americans did not receive any legal help or enough legal help for 92% of the problems that impacted their lives, according to the Georgetown study.

Recent years have seen a dramatic rise in self-represented (pro se) litigants in civil cases. Given this, the processes that were designed for navigation by attorneys and legal and court professionals need to be simplified to reflect the needs of non-professional court users. A Pew Research study on experiences with state courts in particular notes that court users strongly desire courts to be easier to navigate. Even among those who had previous court experience, 50% indicated that it was a little hard or very hard to navigate court paperwork and steps in a case.

A modernizing court workforce

Millennial-aged workers constitute approximately 75% of the workforce and are the most prevalent court users today and in the foreseeable future. As digital natives, this generation expects modern tools when navigating the legal system.

A State of the State Courts poll commissioned by the NCSC last year found that large percentages of registered voters surveyed support increased use of AI chatbots to answer court FAQs (with 63% saying this), using AI to translate court documents into other languages (64%), and using AI to break down complex legal jargon and make information more accessible (71%).

Further, this lack of modernization in courts has consequences for judges and court professionals as well. Court staff are feeling strained by their workload, and many report simply not having enough time to catch up. More than half (57%) of court professionals and administrative staff reported not having enough time, according to the Staffing, Operations and Technology report.

The report also found that 91% of court staff report working more than 40 hours each week, with about one-third of them working more than 46 hours per week.


Given all this, the pressure courts are under to modernize is understandable; however, it should be looked at as an impetus for improvement: Courts face a once-in-a-generation opportunity to reimagine their workflows.

Resources available to fund statewide technology improvements

Several states leveraged one-time resources available through the American Rescue Plan Act of 2021 (ARPA) to fund major investments in court technology. The Kentucky courts’ Administrative Office of the Courts (AOC), for example, used $38 million to update a two-decade-old in-house case management system. (The AOC is the operations arm of the state court system, which supports 3,000 employees and more than 400 elected justices, judges, and circuit court clerks.) Kentucky courts’ AOC selected a third-party technology provider that offers online tools for judges, circuit court clerks, and attorneys, as well as a tool for pro se litigants.

On the other hand, Arkansas courts’ AOC opted to build its own in-house court management system, as the cost was significantly less than vendor rates. Initial estimates to upgrade a legacy system were $70 million, and Arkansas was able to build its own for $20 million, funded through ARPA appropriations that came from the state legislature. Indeed, Arkansas has been a leader in court technology for more than 20 years and signed contracts for automated document redaction more than a decade earlier.

The state courts’ new customized cloud-based solution incorporates multiple vendors, and the development process (now two years underway) has launched Contexte Case Management, an internal-facing tool, and Search ARCourts, a public-facing case information tool. All Arkansas appellate and circuit courts are using Contexte, and nearly half of district and juvenile courts have already implemented the system.

Moving forward, slowly and thoughtfully

While private sector legal technology has advanced quickly, courts face unique challenges that often make off-the-shelf solutions an inadequate fit. Investment in court modernization must balance the efficiency gained against fiscal responsibility.

Successful implementation in courts will take cultural, procedural, and budgetary shifts. Internally, collaboration between judges, administrators, and IT staff is essential; and externally, any public-facing tools should center around user experience and ease-of-use, perhaps offering a dedicated customer service team to guide users so that technology reduces complexity rather than adding to it.

The real return on investment in court systems will be realized when all users can access justice more easily, equitably, and reliably.


You can download a full copy of the Staffing, Operations and Technology: A 2025 survey of State Courts from the Thomson Reuters Institute and the National Center for State Courts AI Policy Consortium for Law and Courts here

Beyond adoption: How professional services can measure real ROI from GenAI https://www.thomsonreuters.com/en-us/posts/technology/measuring-genai-roi/ Thu, 02 Oct 2025 13:08:28 +0000 https://blogs.thomsonreuters.com/en-us/?p=67679

Key takeaways:

      • Strategic alignment drives ROI — Organizations that implement GenAI with a clear, formal strategy aligned to their broader business goals, such as revenue growth or client experience, achieve stronger measurable ROI than those adopting AI informally.

      • Measuring GenAI requires more than basic metrics — While many firms currently track simple, internally-focused metrics like cost savings and user adoption, true value from GenAI comes from mapping its use to strategic outcomes such as revenue generation, operational efficiency, and client satisfaction.

      • AI strategy aids measurement capabilities — Despite increasing adoption of GenAI tools, less than one-quarter of professional services organizations have a visible AI strategy, according to our research, which decreases their ability to properly measure GenAI’s organizational impact.


At this point in the lifecycle of generative AI (GenAI), most individuals across the professional services world have a conception of what GenAI is and what it can do. Indeed, 96% of respondents had at least a basic understanding of AI principles, according to the 2025 Future of Professionals report, which surveyed corporate, legal, tax & accounting, and government professionals.

With that in mind, most organizations are prepared to take the next step: Making GenAI an integral part of their operations and measuring its direct impact on the organization. It’s a natural progression, as individual use of publicly available GenAI technologies such as ChatGPT or Claude turns into institutional investment in business-centric tools such as Microsoft Copilot or industry-specific GenAI tools.

Of course, organizational leaders whose teams are using these tools want to see how much they really help, and are attempting to quantify GenAI’s return-on-investment (ROI).

However, those that have undertaken the ROI exercise have found that arriving at an answer may be easier said than done for a number of reasons. Many professionals are just beginning with the tools and have not yet fully integrated them into their workflows, which makes the true impact of GenAI harder to measure. Determining the time saved by AI tools requires an intricate knowledge of how these professionals work on a daily basis; and most professional services firms are not yet talking to their outside clients about GenAI, making calculations around business won or client satisfaction next to impossible.

That said, there already are some simple ways to begin mapping GenAI usage to a set of ROI metrics. It starts with knowing what your organization wants to achieve by using GenAI.

Mapping use cases to goals

GenAI, as is the case with all business-oriented technologies, should not be treated as a goal in itself. When determining metrics around AI use, start with the organization’s primary set of strategic initiatives then extrapolate from there.

For instance, 81% of C-Suite respondents say increasing revenue is one way they measure success, according to the Thomson Reuters Institute’s recent 2025 C-Suite Survey. GenAI, therefore, should be rolled out with this in mind, with potential use cases aimed squarely at increasing revenue, such as delivering stronger market analysis and predictive analytics for client issues. If instituted with the larger revenue goal in mind, the ultimate metric for the technology’s success is not simply usage, but how well the technology actually contributes to revenue gains.
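As a back-of-the-envelope illustration of the arithmetic involved, the sketch below computes a simple ROI ratio from hypothetical figures. All numbers and variable names here are invented for illustration; real measurement would require the attribution work the report describes (tracing revenue and time savings to GenAI use).

```python
# Illustrative only: a minimal ROI calculation using hypothetical figures.
def roi(gain, cost):
    """Return-on-investment as a ratio: (gain - cost) / cost."""
    return (gain - cost) / cost

annual_tool_cost = 120_000    # hypothetical annual spend on GenAI tooling
revenue_attributed = 150_000  # hypothetical new revenue traced to GenAI use
hours_saved_value = 60_000    # hypothetical value of professional hours freed up

total_gain = revenue_attributed + hours_saved_value
print(f"ROI: {roi(total_gain, annual_tool_cost):.0%}")  # prints "ROI: 75%"
```

The hard part, of course, is not the division but filling in `revenue_attributed` and `hours_saved_value` with defensible numbers, which is exactly why strategy-aligned metrics matter.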

The chart below from the Future of Professionals Report provides some examples from a law firm perspective of how other organizational goals can lead to ROI metrics, including bolstering the client experience, creating operational efficiencies, and attracting and engaging talent. Other industries such as tax, audit & accounting; government agencies; and courts have their own sets of goals that can be adapted in the same fashion.


GenAI is a powerful tool particularly because of its versatility. While many past technologies aimed at professional services were focused squarely on one or two use cases, GenAI, as demonstrated above, can be adapted to serve a number of different uses and goals. As a result, implementing these use cases — and crucially, measuring their success — requires more strategic planning than past technologies.

The importance of strategy

Even with the rate of GenAI adoption continuing to climb, formal AI strategies are not climbing at the same rate. The Future of Professionals report found that just 22% of respondents say their organizations have a visible AI strategy, while 43% say their organizations are moving ahead with adoption despite having no formal strategy in place. About one-third of respondents, meanwhile, say their organizations have no significant plans for widespread adoption.

Unsurprisingly given the above, this lack of strategy has a tangible impact on measurable ROI, particularly as it relates to underlying revenue. The report notes that organizations with a strategic AI plan are almost twice (1.9-times) as likely to already be experiencing revenue growth as a result of their AI investment than those organizations that are adopting AI informally. Similarly, 81% of respondents at organizations with an AI strategy report seeing some sort of positive ROI from AI; only 64% of respondents at organizations adopting AI informally say the same.


Measuring proper ROI from GenAI implementation is not an impossible undertaking, but at the same time, it is not an easy proposition. The Thomson Reuters Institute’s 2025 Generative AI in Professional Services Report from earlier this year found that even of those organizations measuring GenAI’s impact, the most common metrics were simple and often internally-focused, such as internal cost savings, user adoption, and user satisfaction. Metrics focused on client satisfaction or external revenue generation, meanwhile, were tracked by less than 40% of organizations, according to survey respondents.

That is the wrong way to approach AI measurement, particularly in a professional services landscape that expects GenAI (and soon, agentic AI) to become a central part of the profession’s workflow within the next five years. If GenAI is becoming so crucial to the organization, then its measurement should be based not on simple technology metrics, but on larger strategic metrics for the organization.

And that means, for organizations without an AI strategy that links to the larger organization’s overall strategy, the time to begin that planning in earnest for the AI-driven future has arrived.


You can download your copy of the 2025 Future of Professionals Report here

The 2025 State of the Corporate Tax Department report: How new tech tools are helping departments manage change https://www.thomsonreuters.com/en-us/posts/corporates/state-of-the-corporate-tax-department-2025/ Wed, 01 Oct 2025 14:06:01 +0000 https://blogs.thomsonreuters.com/en-us/?p=67730

Key findings:

      • Tax departments gaining strategic relevance — Corporate tax departments are gradually gaining more strategic relevance within their organizations; however, their efforts to move beyond traditional compliance roles are often hindered by competing priorities, tight budgets, a chronic talent shortage, and the challenges of upgrading to new technologies.

      • Departments cite top priorities — The top priorities for tax leaders in 2025 include improving tax compliance accuracy, navigating uncertainty, keeping up with new tax legislation, adding tax technology tools, and automating more processes to counter what many see as a lack of resources and qualified staff.

      • Adoption of new technologies is key — The adoption of new technology solutions, including automation and AI, is on the rise. While many companies are still transitioning from legacy tech systems, a significant number of tax departments plan to introduce more technology and automation in the coming year.


Corporate tax departments have been trying for years to move beyond what many in upper management see as simply a compliance role. Now, tax function leaders are seeking to redefine their departments as a source of more strategic, proactive intelligence that can add value to their organizations. New tax technologies, automation, and more centralized data management have certainly given tax departments the means to become more strategically relevant, but progress toward that goal has been slower than many expected.


Indeed, according to the 2025 State of the Corporate Tax Department Report, published by the Thomson Reuters Institute and Tax Executives Institute, many well-intentioned corporate tax departments are still navigating through a familiar maze of organizational obstacles, including tight budgets, a chronic talent shortage, and the challenges of upgrading — and adapting to — new technologies and systems.

In addition to various internal struggles, the report also explores how the volatility and unpredictability of today’s political environment is affecting tax leaders at some of the largest companies in the world.

Priorities and challenges

The report surveyed more than 250 senior decision-makers in corporate tax departments worldwide to gain insight into tax department leaders’ strategic priorities and most pressing challenges, as well as their views on technology, resources, budget, and staffing.

According to the report, tax leaders’ top priorities have not changed much in the past few years, with this year’s top priorities including: tax compliance, tax planning and strategy, keeping up with new tax legislation, adding tax technology tools, and automating more processes.


Not surprisingly, survey respondents cited numerous challenges to achieving these priority goals. And while familiar challenges — such as a chronic talent shortage and ever-changing regulations — continued to make the list, one factor vaulted to the top this year: navigating the market uncertainty caused by shifting political alliances, fluctuating tariffs, and changes to the United States tax code.

The report emphasizes that uncertainty about tariffs, trade routes, tax regulations, filing rules, and supply-chain security has emerged as a major concern. This is especially true for tax professionals within large multinational corporations, whose departments are currently engaged in an urgent push to understand how these complex geopolitical factors might impact their respective enterprises around the world and what they can do about it.

Also not surprisingly, another top challenge was managing digital transformation, which includes the complex process of implementing new systems, tools, and processes, including automation and AI. According to the report, a majority of tax professionals (70%) say their companies are still navigating the transition from legacy systems and processes to more centralized, automated systems that give departments the time and tools they need to engage in more proactive tax management.

Adoption of new technologies is on the rise: more than half of this year’s survey respondents say their departments plan to introduce more technology and automation in the coming year, with more than half of those saying their department’s new technology would include generative AI.

Moving toward a more strategic and proactive stance

Another major theme explored in the report is the ongoing effort by tax professionals to do less tactical or reactive compliance work and more strategic and proactive data analysis and forecasting. Currently, tax professionals say they are spending more than half their time on tactical and reactive work but would prefer to spend less.

This desire to spend more time mining business intelligence from tax data has been on many tax departments’ wish list for several years. In most cases, however, a department’s ability to free up its tax professionals to devote more time to proactive pursuits is directly tied to the available resources. Yet, resource scarcity continues to be a thorn in the side of many departments, with 58% of respondents claiming that their department is under-resourced, up from 51% in 2024.

To address their resourcing issues, many departments are pursuing a three-pronged strategy, the report notes, that includes incorporating new technologies, hiring more qualified tax professionals, and outsourcing a portion of routine compliance and audit functions.

Interestingly, the report reveals that the tax professionals most likely to report that their departments are not under-resourced are those from smaller companies (those with less than $50 million in annual revenue) and larger companies (those with more than $1 billion in annual revenue).

Meanwhile, midsize companies are the ones most likely to struggle with resourcing issues, the report shows, due mainly to less robust budgets, complex infrastructure issues, and something of a wait-and-see attitude toward adopting more advanced automated technologies. However, the report also notes that many midsize companies are also in the midst of technological transitions that should put them in a more advantageous position within the next year or two.


You can download a full copy of the 2025 State of the Corporate Tax Department, from the Thomson Reuters Institute and Tax Executives Institute, by filling out the form below:

2025 Emerging Technology and Generative AI Forum: Human creativity and feedback drive ethical AI adoption https://www.thomsonreuters.com/en-us/posts/technology/emerging-technology-generative-ai-forum-ethical-ai-adoption/ Tue, 30 Sep 2025 14:45:38 +0000 https://blogs.thomsonreuters.com/en-us/?p=67743

Key takeaways:

      • Embrace value, risk, and execution — for good and bad — Professional services firms must weigh the value of AI applications against potential risks, embracing both successes and failures as learning opportunities to improve responsible adoption.

      • Ethical oversight is everyone’s responsibility — Ensuring responsible AI use in professional services requires active participation from all members of an organization, not just legal or IT teams.

      • Human creativity and feedback remain essential — While AI can generate ideas and accelerate processes, human judgment, creativity, and continuous feedback provide the proper pathways for ethical decision-making and successful integration.


AUSTIN, Texas — With the professional services world now squarely into the AI era, it’s clear that the speed of business is quicker than ever. Clients expect results in hours or even minutes rather than days, while generating documents can happen at the click of a button. Ask a research question, and a machine can intuit what you’re looking for with striking accuracy.

Alongside these business changes, however, it’s clear that the ethics of technology usage within professional services is shifting just as quickly. “Every time you come and do a talk with a group of people, within four weeks if not sooner, it’s changed,” says Betsy Greytok, Associate General Counsel in Responsible Technology at IBM. “So, it really does require you to keep on your toes.”

Ensuring that AI is used responsibly is especially critical within professional services, given the ethical and regulatory constraints placed on legal, tax, audit & accounting, financial services and risk, and more. During a recent session, A Unified Field: Ethical Considerations amid AI Development and Deployment, at the Thomson Reuters Institute’s 2025 Emerging Technology and Generative AI Forum, panelists described an ethical landscape that should be tackled as a challenge, rather than shied away from as an unsolvable risk.

Or, as Paige L. Fults, Head of School at the AI-centric Alpha School & 2-Hour Learning program, put it: “Not being afraid of replacement, but leaning into repurpose.”

Embracing success — and failure

John Dubois, the Americas AI Strategy Leader at Big 4 consultancy Ernst & Young, says he regularly gets questions from customers about AI and how they should use it, given that there are new AI applications arising seemingly every day. “The way we describe it is a balance,” Dubois explains. “Let’s start with value. If we know there’s value in something, then we can figure out the risk behind it, then we can figure out how we can execute.”

Just as importantly, however, this focus on value, risk, and execution can also aid professional services firms when an AI plan fails. For example, Dubois cites an MIT report from August 2025 that showed 95% of GenAI pilots fail, often because of flawed integration. Embracing the value, risk, and execution strategy from the beginning not only allows for better chances of success, but even in the event of failure, “we actually have a better shot at mitigating, when it does fall down.”

This sort of planning is not limited to just one group, Dubois says, noting that ethical oversight is seen as a key responsibility of everyone in the organization. He explains that E&Y has an internal implementation of OpenAI that has 150,000 distinct users each month. Because of an internal process called SCORE that removes customer data at the source, E&Y’s instance of OpenAI is largely clear of customer data — but it’s still not perfect.

E&Y has set a culture so that if someone sees proprietary data when using GenAI to develop a proposal or create a PowerPoint, they not only delete the data before use, but work to scrub it from the system entirely. “It is all of our job to ensure that whatever you’re putting into that system or extracting out of that system, you’re cleansing,” Dubois says. “It’s not the job of the general counsel, or the risk team, or the IT team, it’s all of our job.”


When it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity.


IBM’s Greytok agreed, noting that she’s part of an internal review board that examines major AI-related projects for ethical issues. A board review at the beginning of the development process determines how risky a use case is, and the board then provides a response, considerations, and next steps. If there is an issue, the board is empowered to stop development, even on a major project.

She drew an analogy to writing a paper in high school, in which there is a marked difference between simply turning in the paper, proofreading your own work, and asking a friend for peer review feedback. “That’s what you want, is that disagreement, because that’s critical thinking.”

She adds: “The researchers sometimes get so excited about what they’ve discovered that they forget to look at the other side of what can happen. You should want that. You shouldn’t be punished for saying, ‘Is this the right thing or not?’”

The importance of feedback

Fults says that at the Alpha School, AI is not only baked into the curriculum, it functions as the teacher. Students spend just two hours a day on academics, led by AI tools and supplemented by offline learning across a variety of subjects, with in-person instructors filling in the gaps that AI cannot cover.

It’s a revolutionary concept but not a static one. Fults notes that “the two-hour learning model has already changed so much since I’ve been part of the school,” and the instructors maintain a Slack channel for improvement ideas that receives hundreds of messages a day.

It’s through this marrying of human intuition and the possibilities of the technology that Fults says she believes the school has found success and used AI ethically within education. “Even though we have this tool, the human levers, the motivational levers that are happening day to day, actually make it work,” she says, insisting that she “can’t just hand [the technology] to any school” without the corresponding processes in place.

Dubois and Greytok also call feedback a crucial part of the process in order to overcome AI barriers. Dubois tells the story of a large retailer that bought satellite images to determine footfall within a store. Shoppers, however, felt that was a privacy risk, and the idea was almost scrapped. Then, however, the legal and IT teams worked together to come up with an idea: Could they track clothing, but not faces, to get the same information about where shoppers were going within the store?

“It’s a creative workaround to get us to the same thing,” Dubois explains. “When you have a constraint, what’s a clever way to work around this so we’re not taking a brand risk or a compliance risk?”

Indeed, when it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity. AI can provide information and context more rapidly than ever before, but ultimately, professionals themselves will be the ones relied upon to make sure AI is used ethically and responsibly.

“AI is an idea generator,” Greytok says. “The solution comes from the human.”


You can find out more about how emerging technologies are impacting professional services here

Augmenting justice: A practical framework for AI in judicial workflows https://www.thomsonreuters.com/en-us/posts/ai-in-courts/augmenting-justice-framework-judge-schlegel/ Mon, 29 Sep 2025 13:28:47 +0000 https://blogs.thomsonreuters.com/en-us/?p=67689

Key insights:

      • Stewardship over speed — Courts shouldn’t rush to adopt AI; they should implement it deliberately with policies, training, and review protocols that align with judicial ethics.

      • Human judgment is non‑negotiable — AI can streamline research and drafting, but interpretation, credibility assessments, proportionality, and equitable discretion must remain human — and handled by the right person at the right decision points.

      • Phased, role‑aware integration — A practical, 10‑phase framework enables incremental adoption across varying readiness levels, emphasizing clear boundaries, verification of outputs, confidentiality controls, and accountability to preserve judicial integrity.


As AI moves from novelty to infrastructure in professional practice, courts face a pivotal question — not whether to use AI, but rather how to implement these tools responsibly.

Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal has become a careful and credible voice in this conversation. Drawing on active judicial experience, Judge Schlegel has published practical guidance and suggested guardrails in a new paper, AI in Chambers: A Framework for Judicial AI Use.

In a recent discussion, he outlined how courts can harness AI’s strengths without compromising the integrity, independence, and wisdom that define sound adjudication.

Why is this framework needed now?

AI’s rapid evolution presents both opportunity and risk. According to Judge Schlegel, technology has reached a stage in which judges must exercise independent judgment in deciding how and when to deploy advanced technology. The judiciary need not be first to adopt new tools; rather, it must be right in how it adopts them. That measured stance reframes innovation as a matter of judicial craft — the question is not speed, but stewardship.

Judge Schlegel’s 10‑phase implementation framework is built from lessons learned in chambers, not a laboratory. Its purpose is to help courts establish boundaries, define roles, and stage adoption in a way that is consistent with judicial ethics and institutional realities. The framework provides a clear on‑ramp for courts at different levels of readiness, emphasizing that successful integration is a process, not a single event.

Judge Scott Schlegel

The initial step, as Judge Schlegel describes, is deceptively simple. “Step 1 is the most important, and that is to do your job,” he writes. AI can accelerate tasks such as drafting or research triage, but it cannot — and must not — replace the uniquely human functions of judging. Interpretation, deliberation, credibility assessments, proportionality, and the exercise of equitable discretion remain irreducibly human. Properly implemented, AI frees judges and chambers staff to focus more attention on those human functions rather than less.

Having the right human in the loop

Much commentary urges keeping a human in the loop; however, Judge Schlegel suggests going further, emphasizing the need to place the right human at the right points in the workflow. Not every participant in chambers must or should use AI for every task. The key is calibrated involvement: Identify decision nodes in which human judgment is critical, and ensure those decisions are made by the appropriate judicial officer or trained staff member. In other words, governance is not satisfied by mere human presence, rather it requires intentional role design and accountability.

Judge Schlegel further cautions against universal, simultaneous adoption. Not every judge needs to begin using AI immediately. However, what every court does need is a shared foundation — policies, training, and review protocols — that clarifies those tasks in which AI belongs, where it does not, and how outputs will be verified. His framework is designed to be accessible and scalable, able to support judges who are early in their learning curve as well as more advanced users who wish to experiment within defined guardrails.

Guardrails that preserve judicial integrity

Responsible implementation turns on a few themes that run through Judge Schlegel’s framework. Verification requires structured review of AI outputs, including fact‑checking and citation validation, before those outputs can influence judicial reasoning or orders. Confidentiality and privilege demand clear limits on what materials may be processed by AI tools and under what data‑handling terms, particularly in situations in which sensitive information or sealed records are involved. Finally, training and change management matter because effective adoption depends on equipping judges and staff with the skills to use AI judiciously and to recognize where it can fail.

Overall, treating AI as a shiny new tool is less helpful than recognizing it as a set of capabilities that, when properly governed, can expand a court’s capacity to deliver timely, well‑reasoned justice. The goal is not to automate judgment, but to support it. When AI accelerates routine drafting or organizes complex records more efficiently, chamber staff can devote more attention to the hard work that only people can do, such as weighing credibility, interpreting precedent, crafting remedies, and explaining decisions in ways that foster public trust.

Moving forward

Judicial adoption of AI will be judged not by novelty but by fidelity to first principles. Judge Schlegel’s message is clear: Courts do not need to be first, but they must get it right.

Taken together, a phased framework such as the one he outlines, the placement of the right humans at the right decision points, and a disciplined focus on the core judicial function can provide a path for responsible integration. With those commitments in place, AI can help courts do more of what matters most — delivering justice that is timely, transparent, and trustworthy.


You can find out more about how courts are managing their transition to a more AI-driven environment here

Unmasking human trafficking: A collective fight to end sex trafficking & exploitation https://www.thomsonreuters.com/en-us/posts/human-rights-crimes/unmasking-human-trafficking/ Fri, 26 Sep 2025 14:27:39 +0000 https://blogs.thomsonreuters.com/en-us/?p=67612

Key highlights:

    • Human trafficking is a local problem — Contrary to the stranger danger myth, trafficking primarily involves emotional manipulation and targets vulnerable populations locally.

    • Prevention requires talking to men and boys — Discussions should be held with men and boys about how online pornography and visits to strip clubs further the sexual exploitation of women and indirectly fuel sex trafficking.

    • Human trafficking is a criminal enterprise — This enterprise operates in the shadow economy, generating profits that make it the world’s second-largest illicit financial enterprise.


An estimated 50 million people are currently living in modern slavery globally, a stark reality often hidden in plain sight. Indeed, World Day Against Trafficking in Persons, established by the United Nations in 2013, serves as a crucial reminder that human trafficking remains one of the most pressing human rights challenges of our time.

A recent Thomson Reuters Institute webinar in observance of this day brought together experts from technology, survivor services, and law enforcement for a discussion to deepen the collective understanding of trafficking’s complexities, examine its devastating impacts on victims, and develop strategies to drive meaningful change.

Debunking myths around human trafficking

Human trafficking is often misunderstood, with common misconceptions including the stranger danger myth that most trafficking situations come from anonymous kidnappers. However, Kristin Boorse, CEO of Spotlight, a nonprofit group helping law enforcement on the front lines of domestic minor sex trafficking, notes that trafficking often involves emotional manipulation by traffickers who target individuals from vulnerable populations, including youth without housing and children in foster care and juvenile justice systems.

Bianca Davis, CEO of New Friends New Life, which provides comprehensive care to human trafficking victims, explains that between 95% and 97% of the survivors with whom she works are local. And, according to the U.S. Department of Homeland Security (DHS), fraud and coercion are more prevalent than brute force in trafficking cases. In fact, traffickers frequently use master manipulator tactics and trauma bonds to exploit existing vulnerabilities.

Evolving approaches to combating trafficking

The fight against human trafficking requires a multi-faceted approach that involves cooperation of technology companies, survivor support organizations, and law enforcement agencies. More specifically, this approach can include:

Use and scale of technology — Boorse states that “technology has changed everything” in the anti-trafficking space and has made it easier for offenders to exploit victims. At the same time, technology is also providing opportunities for identifying survivors and offering support. Spotlight has developed innovative tools that help law enforcement and service providers identify trafficking situations and connect survivors with vital resources.

“Trying to identify a child victim in this mound of data is like trying to identify a needle in a haystack, but we aren’t looking for needles, we’re looking for children,” Boorse explains, adding that Spotlight leverages data and AI to help with the identification of some of the most vulnerable children. “We aim to reduce the time it takes to identify a victim from months to minutes. The speed of identification has a direct relationship to recovery and reduces the amount of time a victim remains in trauma.” With the help of technology, Spotlight has helped investigators identify more than 26,000 children.

Holistic approaches to support survivors — Comprehensive survivor support and compassionate care are key parts of the equation when stopping human trafficking. A model to mirror is the one used by New Friends New Life, which provides a range of services, including housing partnerships, emotional trauma support, and economic empowerment through job training and education. Davis emphasizes that rebuilding trust is crucial, and “the whole bunch of love approach” is essential in supporting survivors.

Collaboration with law enforcement and survivor care providers — The protracted nature of prosecuting human trafficking cases makes cooperation between law enforcement and the nonprofits that support survivors critical. Indeed, 24 to 36 months often pass before a trial begins or a trafficker is convicted through a plea deal. That lengthy timeline can strain victim engagement, especially given the trauma and instability survivors often face.

Because of this challenge, law enforcement relies heavily on robust partnerships with service organizations. The hyper-focus on the welfare and the well-being of the victim is key so that law enforcement can then focus on working with the prosecutors on the case.

In addition, collaboration on expanding awareness of traffickers’ recruitment methods in digital spaces is essential. Another key challenge lies in public awareness of the shifting landscape of online trafficking. More awareness and education are needed in our schools and within our own homes, primarily about how offenders use social media and other online platforms to identify and gain a foothold with potential victims. Sadly, more than 95% of minor victims are recruited on Instagram, according to DHS.

Addressing root causes is key to prevention

Of course, the best way to prevent sex trafficking is to stop it before it starts, and effective prevention demands a multi-faceted approach starting with early intervention and addressing both vulnerabilities and demand. For example:

Give vulnerable minors love and acceptance — Boorse emphasized the importance of “looking back at the family unit,” noting that “one of the most critical pieces is… this feeling of being loved and accepted, obviously with appropriate boundaries.” She argued that fortifying family and community relationships from early childhood onward can help build emotional resilience and provide children with the strong foundation they need to resist exploitation.

Addressing demand is equally vital — Human trafficking is the second largest and fastest growing criminal enterprise in the world, which means it is a business. “If we are ever going to end this issue, we have to address the demand — and that means talking to our men,” Davis states. More specifically, there is a strong need to educate men and boys about the impact of seemingly “normalized” behaviors, such as consuming sexually exploitative online pornography or visiting strip clubs because “these normalized behaviors fuel a criminal industry,” she adds.

Everyone has a role to play

Combating human trafficking is a collective responsibility that requires education and action from many sectors of society. Everyone has the opportunity to educate themselves and support organizations like Spotlight, New Friends New Life, and the DHS Blue Campaign. Reporting suspicious activity to the DHS tip line at 1-866-DHS-2-ICE (1-866-347-2423) or the National Human Trafficking Hotline at 1-888-373-7888, as well as advocating for meaningful policy changes in local communities, are other ways to help.

Only through shared commitment and action can we build a world in which exploitation no longer has a foothold and all people are free from the devastation of human trafficking.


Learn more about human trafficking and human rights crimes through the Thomson Reuters Institute resource center.

From hours to outcomes: How alternative pricing models are redefining tax firm profitability https://www.thomsonreuters.com/en-us/posts/tax-and-accounting/alternative-pricing-models/ Thu, 25 Sep 2025 12:14:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=67622

Key takeaways:

      • Subscriptions are a high-value option — Subscription-led pricing correlates with the highest confidence in the value delivered and steadier revenue compared with hourly or fixed-fee models.

      • Three pricing packages evolve — Three-tier packages (basic, standard, and premium) create a clear value ladder and enable increased customization through modular add-ons.

      • Regular billing cycles help — Monthly or quarterly billing cadences improve transparency, client trust, and firm cash flow.


Tax, audit & accounting firms are in the middle of a pricing reckoning. Clients want clarity, firm leaders want confidence, and teams want to escape the treadmill of selling hours. The firms pulling ahead aren’t just raising rates; they’re re-engineering how they define and deliver value. Packaging, bundling, and especially subscription-based pricing are allowing firms to price with conviction, increase margins, and deepen client loyalty. The shift is not cosmetic; rather, it’s a strategic reset from billing for hours to being paid for outcomes.

The confidence advantage of subscriptions

According to the Thomson Reuters Institute’s recent 2025 Tax Firm Pricing Report, in firms that have adopted subscription billing for most clients, tax professionals’ confidence in the value they’re providing is materially higher than when hourly or fixed-fee pricing models are used. Indeed, nearly one-third of tax professionals in subscription-first firms say they are highly confident that their pricing aligns with the value delivered, compared to less than 20% of those professionals in firms that use hourly pricing.

Why the gap? Subscription pricing models reframe the client relationship around results, not individual tasks. These models anchor expectations, create continuity, and prompt ongoing conversations about progress and outcomes. They also bring predictability — steady revenue for the firm and transparent costs for the client.

Conversely, hourly and even traditional fixed-fee pricing models struggle to tell that story. They describe inputs and deliverables, while subscriptions describe impact.

Despite the benefits, firm adoption of subscription pricing is still in its early stages. Only a small portion of client engagements are currently based on subscriptions, although that share is growing rapidly. This gap is an opportunity for many tax, audit & accounting firms and their leaders. Indeed, the invitation is clear: Firm leaders should identify those services in which outcomes compound over time — such as tax planning, strategy, compliance, and advisory — and transition those into ongoing, subscription-based relationships with clients.

Design services like products: The 3-tiered architecture

Modern pricing gains power from clarity. That’s why the most effective firms are organizing their service catalogs into three simple tiers — basic, standard, and premium — which then allows for additional customization through modular add-ons.


This architecture does three things well: First, it creates a value ladder that allows firms to guide their clients to the right entry point while giving them a clear path to upgrade; second, it standardizes delivery, improving margins and team efficiency; and third, it enables customization without chaos.

Rather than reinventing a customized scope for each client, firms use defined add-ons — education planning, entity structuring, succession planning, and more — to tailor engagements to the client while maintaining operational consistency across all services.
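The tier-plus-add-on structure described above can be modeled very simply. The sketch below is purely illustrative — the tier names follow the article, but the prices and add-on names are hypothetical and not taken from the report:

```python
# Illustrative sketch of a three-tier service catalog with modular add-ons.
# All dollar amounts are hypothetical examples, not figures from the report.

TIERS = {
    "basic": 500,      # monthly base fee per tier (hypothetical)
    "standard": 1200,
    "premium": 2500,
}

ADD_ONS = {
    "education_planning": 150,   # modular add-ons layered onto any tier
    "entity_structuring": 300,   # (names from the article; prices invented)
    "succession_planning": 400,
}

def monthly_fee(tier: str, add_ons: list[str]) -> int:
    """Return the monthly subscription fee: tier base plus selected add-ons."""
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return TIERS[tier] + sum(ADD_ONS[name] for name in add_ons)

# Example: a standard-tier client who adds entity structuring
print(monthly_fee("standard", ["entity_structuring"]))  # 1500
```

The point of the structure is operational: the catalog stays fixed and auditable, while each engagement is just a tier choice plus a list of add-ons, so customization never requires re-scoping from scratch.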

Earning (and keeping) your fee increases

The best-performing firms aren’t timid about fees. They’re raising prices — and keeping clients — because they’ve reframed the value conversation. Instead of talking about more hours or complexity, they talk about the kind of outcomes that clients actually care about: peace of mind, risk reduction, strategic clarity, and measurable savings. The tax professionals bring real examples, case studies, and ROI to the table, and they benchmark their fees against peers. They review pricing annually or even quarterly, and they communicate changes to their clients in a way that feels transparent, justified, and aligned with client goals.

This is a pivotal shift for many tax, audit & accounting firms. The professionals at these firms have learned that when clients understand the outcome, the price makes sense; and when they don’t, the conversation reverts to cost. Packaging and subscriptions make this communication repeatable. In this environment, tiers create contrast, add-ons create choice, and benchmarks create external validation. Together, these factors shift the dialogue with clients from How much? to What’s the impact? — and that’s a win for firms.

Predictability is a service

If trust is the currency of advisory work, predictability is the interest it earns. Monthly or quarterly billing rhythms can reduce friction, improve cash flow on both sides, and transform tax from a once-a-year scramble into an ongoing partnership. Sending out clear, consistent invoices mapped to packages and add-ons can reinforce the story of value delivery. Internally, predictable revenue can smooth seasonality within a firm, supporting hiring and capacity planning, and reducing the temptation to discount prices under pressure.

Customization at scale

Clients want to feel known, and your tax team needs to stay sane. The answer isn’t to create bespoke products for everything; rather, it’s to encourage segment-smart design. Build packages for common client profiles by industry, entity type, size, or lifecycle stage, then equip your team with modular upgrades that align to clear outcomes. This allows tax advisors to make confident recommendations, identify retention risks early, and adjust scope based on profitability and feedback — all without blowing up workflows.

Think like a product organization: Define standard features, articulate premium benefits, and maintain a disciplined roadmap of add-ons. Then enable your tax advisors with a playbook — which clients get what, when, and why — so the client experience feels personal while the back office remains efficient.

A practical path to transition

If you’re ready to move from hours to outcomes, you should start with focus and speed. Here are several steps that can help:

      • Choose the right beachhead — Identify one or two services that are ideally suited for ongoing value, such as monthly accounting plus tax, annual planning with quarterly check-ins, or entity support, and then package those services into clear tiers.
      • Build the narrative — For each tier, translate features into outcomes. Replace X reconciliations and Y reports with real-time visibility, faster decisions, and fewer surprises. Back this effort with case studies and quantified savings wherever possible.
      • Set billing cadence and service-level agreements — Decide what services can be billed monthly compared to quarterly, define response times and access levels per tier, and codify communication rhythms. Make service levels visible, because again, clients value clarity.
      • Pilot, then expand — Roll out your initial offerings to a defined client segment or cohort. Collect feedback, refine scope, and test pricing elasticity. Use early wins to train your team and inform a broader rollout.
      • Institutionalize benchmarking and reviews — Compare your pricing against peers and alternative service providers at least annually. Review client outcomes quarterly and then adjust tiers, add-ons, and messaging based on what you learn.
      • Equip your team — Give your tax advisors scripts, ROI calculators, and objection-handling guidelines. Realize that confidence is contagious, both internally and externally.

Shifting from selling time to selling outcomes requires more than a new price list. It asks firm leaders to design services intentionally, measure impact consistently, and coach their teams to speak the language of results. It also asks firms to treat pricing as strategy, not administration. Firms should be explicit about what clients they serve, what they promise, and what it’s worth.

The firms that make this shift will do more than improve margins. They’ll build sturdier client relationships, reduce scope creep, and cultivate a culture in which the team understands — and can articulate — the value they create. In a world in which talent is tight and client expectations are rising, that kind of intentional clarity can be a strong competitive advantage.


You can download a full copy of the Thomson Reuters Institute’s recent report on tax firm pricing, Steps for increased confidence in pricing, here.
