<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[HRExaminer]]></title><description><![CDATA[A deeper conversation about HRTech, WorkTech, and AI. We look at tech trends, the implications for regulation (or the lack thereof), and try to see around the corner for what's next.  
Click skip to enter. ]]></description><link>https://www.hrexaminer.com</link><generator>Substack</generator><lastBuildDate>Fri, 03 Apr 2026 20:25:10 GMT</lastBuildDate><atom:link href="https://www.hrexaminer.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[HRExaminer, Healdsburg, CA]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hrexaminer@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hrexaminer@substack.com]]></itunes:email><itunes:name><![CDATA[John Sumser]]></itunes:name></itunes:owner><itunes:author><![CDATA[John Sumser]]></itunes:author><googleplay:owner><![CDATA[hrexaminer@substack.com]]></googleplay:owner><googleplay:email><![CDATA[hrexaminer@substack.com]]></googleplay:email><googleplay:author><![CDATA[John Sumser]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Trans-formation]]></title><description><![CDATA[by Heather Bussing]]></description><link>https://www.hrexaminer.com/p/trans-formation</link><guid isPermaLink="false">https://www.hrexaminer.com/p/trans-formation</guid><dc:creator><![CDATA[Heather Bussing]]></dc:creator><pubDate>Fri, 27 Mar 2026 18:20:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!DNJ0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DNJ0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DNJ0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DNJ0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DNJ0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DNJ0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DNJ0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1297011,&quot;alt&quot;:&quot;Multi colored poppies in a field&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/192338715?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Multi colored poppies in a field" title="Multi colored poppies in a field" srcset="https://substackcdn.com/image/fetch/$s_!DNJ0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DNJ0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DNJ0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DNJ0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b697b50-c38f-4281-8fed-9fce779ec77c_3783x2522.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Gender is construct. Even the same kind of flowers have an infinite variations in color, markings, patterns, and size. Some maybe subtle, others more pronounced. But we would never say that any of them are wrong.</figcaption></figure></div><p>For a long time, I knew almost nothing about transgender people. </p><p>Then one night in my law school class a student told a story about going to the veteran&#8217;s hall to meet some of his buddies he had served in the military with. They were sitting around and talking shit when a tall woman walked in and sat down.</p><p>At first, he wondered what she was doing there. Then she explained that she had served with them in Iraq, but had since transitioned and had always been a woman. My student confused and really upset. He didn&#8217;t understand. There were men and there were women; they weren&#8217;t allowed to switch. </p><p>But then he remembered how she had been there for him during the war. And after listening to her story and what she had been through to get there, it dawned on him that all she wanted was to be herself.</p><p>That&#8217;s it. </p><p>That&#8217;s everything.</p><p>We all just want to be ourselves.</p><p>That story has stayed with me as I continue to learn and work with pay equity and the bigger issues of inclusion and diversity and fundamental fairness. I saw it as a story of changing our minds to accept what was unimaginable. I have so much respect for my student.</p><p>About the same time, I had another student who was transitioning from being a woman to being a man. He was so excited when the hormones kicked in and chin hairs started to sprout. I laughed since I spend way too much time pulling mine out.</p><p>In the past few years, I learned that other friends are exploring the fact that they have always felt like they were in a body that didn&#8217;t match who they are.</p><p>Talking with them, I&#8217;ve had to let go of a lot of beliefs I had about gender. Things I thought I knew, but had never really examined. So while I still have a lot to learn, here are some of the things I&#8217;ve been thinking about and beginning to understand differently. </p><p><strong>People come in lots of genders</strong></p><p>Initially, I thought of transgender people on a spectrum between female and male. Then I had a conversation with a friend who said he thought it was much more like a sphere. You can plot someone&#8217;s gender on the sphere, but it will have many different attributes of our traditional notions of male and female genders. But we all have lots of attributes that don&#8217;t fall into either category and could be either or both. We all know boyish women (I&#8217;m one) and girlish men. In our culture it&#8217;s fine to be the first, but not the second. That&#8217;s not true everywhere. </p><p>Most of us have traits, interests, and attributes that could be associated with a gender stereotype. But the truth is, most of what we think about gender is based on culture, myth, and unfounded assumptions. Labels like male and female are just that. Gender labels are shorthand for an infinite combination of human attributes that are much more interesting when you look at people through the lens of being a person instead of having a gender.</p><p><strong>What&#8217;s in their pants is none of your damn business</strong></p><p>Some trans people change gender physically through hormones and/or surgery. Others never change their bodies, just their presentation in the world. Some you would never know they used to present as a different gender. Some you would never know because they still present as the gender they started with.</p><p>There are humans with penises and breasts, humans with vaginas and beards, and just about every combination you could imagine. How someone decides to handle their gender presentation is a very intimate and personal decision. </p><p>And it&#8217;s none of your damn business, no matter how genuinely caring and curious you are. </p><p><strong>Transgender is not about sexuality</strong></p><p>Trans people have all the sexual preferences other people have. And the truth is, that if the idea of gender is much more fluid than our cultural insistence that you can be one of two, then the whole notion of being homosexual or heterosexual is probably worth reexamining too.</p><p><strong>Transgender is not a mistake</strong></p><p>About <a href="https://www.hrc.org/resources/seven-things-about-transgender-people-that-you-didnt-know">1.6 million people in the US</a> identify as trans. Globally, about 1% of people are trans and an additional 2% are non-binary, gender fluid, or something besides trans, female, or male. And those are the ones willing to reveal themselves. In a 2011 Transgender Discrimination Survey, 71% said they hid their gender to avoid being discriminated against. Or killed.</p><p>Transgender just is. It&#8217;s part of the sphere of gender and there are many variations even among trans people.</p><p>There weren&#8217;t supposed to be such things as black swans. But there are. And they&#8217;re beautiful.</p><p>Variations are wonderful</p><p>Gender is construct. Even the same kind of flowers have an infinite variations in color, markings, patterns, and size. Some maybe subtle, others more pronounced. But we would never say that any of them are wrong.</p><p>It turns out that people are like that too.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/trans-formation?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/trans-formation?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[What the Court Actually did in Mobley v. Workday]]></title><description><![CDATA[by Heather Bussing]]></description><link>https://www.hrexaminer.com/p/what-the-court-actually-did-in-mobley</link><guid isPermaLink="false">https://www.hrexaminer.com/p/what-the-court-actually-did-in-mobley</guid><dc:creator><![CDATA[Heather Bussing]]></dc:creator><pubDate>Fri, 20 Mar 2026 21:14:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Dxmn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Dxmn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Dxmn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Dxmn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Dxmn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Dxmn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Dxmn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:278462,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/191623456?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Dxmn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Dxmn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Dxmn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Dxmn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a4e4f9-d80d-4c0e-bd5d-cdf283bfcaf8_1600x1067.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo of one of the courtrooms at the Ninth Circuit Court of Appeals in San Francisco.</figcaption></figure></div><p>I&#8217;ve been reading articles about the trial court&#8217;s March 6, 2026, order in Mobley v. Workday and they don&#8217;t make sense to me. So I pulled <a href="https://s3.documentcloud.org/documents/27781349/us-dis-cand-3-23cv770-d24320156e190-order-granting-motion-for-leave-to-file-amicus-bri.pdf">the order</a> and here&#8217;s exactly what happened. The TLDR is: not much, carry on.</p><p>But for my fellow employment law nerds, here&#8217;s the story and analysis. </p><p><strong>The motion</strong></p><p>Workday moved to dismiss portions of the latest complaint by Mr. Mobley and some additional Plaintiffs that were added to the Complaint.</p><p>A motion to dismiss (federal court) or a demurrer (state court) is basically a legal &#8220;So what?&#8221; The defendant says, even if everything in the complaint is true, the Plaintiff still doesn&#8217;t have a claim because there&#8217;s a legal reason why they can&#8217;t make this claim in this case. The idea is to avoid factual issues that require hearings and evidence and just look at the viability of the claim from a purely legal perspective. These motions happen at the beginning of the case to get rid of the claims that won&#8217;t work. </p><p>Theoretically, it saves everyone time and money. But sometimes it&#8217;s a merry go round of complaints, challenges, amended complaints, more challenges, lather, rinse, repeat. This case is on the merry go round track.</p><p>Workday had a few arguments, most of them not that great. One kinda worked, but not really. Here were the issues.</p><p><strong>Can job applicants bring disparate impact cases for age discrimination?</strong></p><p>First, Workday claimed that the Age Discrimination in Employment Act did not apply to disparate impact claims by job applicants. A disparate impact case is when a policy or practice of the employer adversely affects a protected group.</p><p>Workday argued that Congress considered amending the ADEA to specifically say that it applied to disparate impact claims by job applicants, but the amendment didn&#8217;t pass, so the ADEA doesn&#8217;t apply in this situation. The court didn&#8217;t buy it because, well, if you can have hiring policies that discriminate against protected classes and nobody can challenge it, it pretty much undermines the whole point of discrimination law. And this is Workday, who claims to be an expert in HR. Sigh.</p><p>The other problem was that there are lots of cases, regulations, and courts that have determined that job applicants <strong>can </strong>bring disparate impact cases for age discrimination. They&#8217;ve been doing it for years. So it came down to speculation about what Congress was doing versus decades of legal analysis by courts and the EEOC. </p><p>Next Workday tried arguing that the court doesn&#8217;t have to listen to the EEOC anymore because the Supreme Court changed all that in 2024 when it decided <em><a href="https://www.supremecourt.gov/opinions/23pdf/22-451_7m58.pdf">Loper Bright</a></em>.  Except <em>Loper Bright </em>says courts don&#8217;t always have to defer to agency interpretations of law; it doesn&#8217;t say they can&#8217;t. After all, sometimes agencies do get it right. It would be silly to have a rule that says courts can&#8217;t ever listen to the agencies whose job it is to interpret and implement statutes. </p><p>So, neither of those arguments worked. Of course they didn&#8217;t work. That&#8217;s the part that confused me. Why is Workday, who is represented by a very good law firm, making sketchy arguments and wasting everyone&#8217;s time?</p><p>It&#8217;s hard to say. My guess is that it&#8217;s Workday&#8217;s strategy to fight everything and delay as much as possible. </p><p>Sometimes motions and delay is partly about putting economic pressure on the Plaintiffs who generally have fewer resources. But this is a landmark case with tons of firms on the Plaintiffs&#8217; side. They are in it to make new law and will do what it takes.</p><p>Mostly I think Workday is trying to narrow the issues down so that they can figure out how to settle this thing before it becomes actual law that will harm the business.</p><p>Or maybe they are planning on taking this to the Supreme Court where law doesn't matter as much anymore.</p><p><strong>Can non-California Plaintiffs bring state discrimination claims under California law?</strong></p><p>Some of the Plaintiffs in the case are not California residents but have alleged claims under California&#8217;s anti-discrimination laws. Sometimes this matters; other times it doesn&#8217;t. It depends on the facts. But here, Workday said there aren&#8217;t any facts that tie these people to California so they should not be able to bring claims under state law.</p><p>The court agreed. But the Plaintiffs argued that they could allege facts that would make a difference in the analysis and court said they could amend the complaint to allege those facts. </p><p>Although, Workday technically won this argument, the Plaintiffs will just amend the complaint and it won&#8217;t matter.</p><p><strong>Can a job applicant allege disability claims for cancer and asthma for being rejected by an ATS?</strong></p><p>This is the one issue that&#8217;s pretty interesting. It&#8217;s possible to extract a lot of information about someone from a job application that gives their address, zip code, what schools they attended, and all the places they worked. You can make pretty good guesses about race, age, gender, whether they have kids, and a lot of other stuff. (See People Analytics.)</p><p>The Plaintiffs claimed that Workday&#8217;s AI screening tools included &#8220;assessments and personality tests&#8221; that could also reveal disabilities and potential mental issues like &#8220;anxiety&#8221; or &#8220;depression.&#8221; </p><p>It&#8217;s a good argument. But it&#8217;s doubtful that screening tools for job applicants are going to reveal that someone has cancer or asthma. And there weren&#8217;t any allegations in the complaint that the AI screening tools could determine that particular Plaintiff&#8217;s disabilities in this case. </p><p>So the court got rid of the disability claim for the Plaintiff who has cancer and asthma because even AI can&#8217;t figure that out. Yet.</p><p>This is going to be an argument we see a lot&#8212;that AI can extract information from the data it has, combine it with other information it has, and can discriminate based on stuff a job applicant never said. It didn&#8217;t work this time, but it will with different facts.</p><p><strong>What&#8217;s next?</strong></p><p>There aren&#8217;t any big takeaways from this motion. The Plaintiffs will amend the complaint. Again. They&#8217;re still working on whether it will be a class action. And the whole thing will drag on for months, maybe years.</p><p>In the meantime, it&#8217;s worth looking at what data from job applications and assessments is being used by tech tools, what data it gets combined with, and whether those processes have an impact on who does and doesn&#8217;t get hired.</p><p>Of course, the problem with looking is that you may also find problems. But it&#8217;s better to discover problems and address them than end up in a lawsuit like this. You will have less discrimination and fewer ginormous legal bills. </p><p>So monitor and audit your hiring outcomes by protected classes. If you&#8217;re worried, get your friendly employment lawyer involved to help you solve any issues and protect the information.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/what-the-court-actually-did-in-mobley?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/what-the-court-actually-did-in-mobley?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[A Zen Failure]]></title><description><![CDATA[On the passing of David Chadwick,]]></description><link>https://www.hrexaminer.com/p/a-zen-failure</link><guid isPermaLink="false">https://www.hrexaminer.com/p/a-zen-failure</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Fri, 20 Mar 2026 12:49:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!chgm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!chgm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!chgm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 424w, https://substackcdn.com/image/fetch/$s_!chgm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 848w, https://substackcdn.com/image/fetch/$s_!chgm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 1272w, https://substackcdn.com/image/fetch/$s_!chgm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!chgm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2367353,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/191519221?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!chgm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 424w, https://substackcdn.com/image/fetch/$s_!chgm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 848w, https://substackcdn.com/image/fetch/$s_!chgm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 1272w, https://substackcdn.com/image/fetch/$s_!chgm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d4667d-7ab2-4254-9e97-6e39f541d882_6000x4000.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This morning, in a phone call with my daughter, I learned that <a href="https://www.cuke.com/dchad/dc-memorial.htm">David Chadwick</a> died. He was one of those larger than life beings. His  history straddled Ft Worth Texas, the civil rights movement, LSD in the 60s, life in Mexico, Japan, and Bali, the evolution of Zen in America, and a huge network of friends and associates.</p><p>When I first arrived in California, it was to run a non-profit housed on the docks in Sausalito. As a way of building a community, I started offering lunch to &#8216;friends and family.&#8217; David started showing up. He knew Sausalito like the back of his hand. He&#8217;d been in and out of all of the (many) recording studios in the flat part of town. He knew the history of the musicians, their entourages, their foibles, and their tastes.</p><p>As I got to know him, we began to discuss Zen.</p><p>I&#8217;ve been a more or less a Zen student all my life. It began with reading Alan Watts&#8217; book, &#8220;<a href="https://www.goodreads.com/book/show/60551.The_Book">The Book: On the taboo against knowing who you are</a>&#8221; in one of the bookstores I ran. On my first low rent camp across the country road trip, I stumbled into a bookstore in Santa Fe and discovered a copy of &#8220;<a href="https://en.wikipedia.org/wiki/Zen_Flesh,_Zen_Bones">Zen Flesh, Zen Bones</a>&#8221;, a compact volume covering the primary texts of one of the Zen schools.</p><p>With David as a tutor, I learned the Chadwick approach. It looked a lot like intentional bumbling. He always found the least desirable chore, jumped on it, and ingratiated himself. Never seeking the limelight, he assessed what needed to be done and then made sure it got done.</p><p>My favorite of his books is called &#8220;<a href="https://books.google.com/books/about/Thank_You_and_OK.html?id=5HwSAgAAQBAJ">Thank You and Okay: Diary of an American Zen Failure in Japan</a>&#8221;. (If you find an early edition, my review is one of the blurbs on the cover.) The book is a lighthearted romp through David&#8217;s adventures in Japan. Unlike many books on Zen (which paint a picture of perfect enlightenment), this story is about the nitty gritty details of bumbling through cultural barriers while hunting for the roots of Zen.</p><p>That was a lot like the man himself. His take on zen was, that it was a way to navigate the messiness of life. More than anyone I&#8217;ve met, his big heart had room and compassion for all sorts of people of all sorts of stripes. He was particularly fond of the class of us who lived on the margins: dreamers, gamblers, musicians, addicts, Zen priests, cult leaders, fallen saints. His world had room for the non-ideal world that we actually live in.</p><p>Early on in our relationship, I got fired from my dream job. I was broken, humiliated, scared, and afraid. At the time, he was writing his biography of Sunru Suzuki, the Zen master who brought Zen to America. Having been a long-time student, David knew all the stories, collected all of the lectures, and assembled a book called &#8220;Crooked Cucumber.&#8221; It was a big project.</p><p>He scooped me up, put me in the passenger seat of the beater he was currently driving, and took me on a tour of all the Zen groups and Zen masters on the west coast. He (with me watching) interviewed every one of them. The trip took weeks. I healed in his car. I learned that people who become Zen priests are profoundly human and full of idiosyncrasies. They had girlfriends, mistresses, big egos, small egos, brilliance, dullness, chemical problems, divorces, and all the gnarly things that really make us human.</p><p>Over the last thirty years we were sometimes close and sometimes far apart. He moved to Bali in his early 70s and made it his home. I lived in the barn he called home in Santa Rosa for a time. He was one of the few people able to navigate both sides of the intensity of my own divorce years later. He helped me get started cooking muffins on Sunday mornings (very early) at Green Gulch Zen Center. We found a lot in common at the Pacific Zen Institute (where I met Heather, my wife.)</p><p>More than anything, David helped me along the journey of accepting myself, my failures, my odd sensibilities, my nagging guilt. He had very little room for judging others and was content to accept himself as a flawed human being. And, he never ceased to surprise.</p><p>I&#8217;ll miss him.</p><p>Photo by <a href="https://unsplash.com/@nu_panuson?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Panuson Norkaew</a> on <a href="https://unsplash.com/photos/birds-eye-view-of-forest-mountain-3ot8O0t5beE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Brussel Sprouts]]></title><description><![CDATA[Or How a bit of arrogance prompted the development of a leadership framework]]></description><link>https://www.hrexaminer.com/p/brussel-sprouts</link><guid isPermaLink="false">https://www.hrexaminer.com/p/brussel-sprouts</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Thu, 19 Mar 2026 13:10:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!swKS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!swKS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!swKS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 424w, https://substackcdn.com/image/fetch/$s_!swKS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 848w, https://substackcdn.com/image/fetch/$s_!swKS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 1272w, https://substackcdn.com/image/fetch/$s_!swKS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!swKS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic" width="1456" height="1093" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1093,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2466218,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/191419757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!swKS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 424w, https://substackcdn.com/image/fetch/$s_!swKS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 848w, https://substackcdn.com/image/fetch/$s_!swKS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 1272w, https://substackcdn.com/image/fetch/$s_!swKS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9fa310ce-62bb-4c6c-8f6d-b77fc7e9d169_4592x3448.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>These days, I find myself wandering down rabbit holes in the early morning hours. There&#8217;s something about working with a partner who is always available and willing to go down there with me.</p><p>My use of LLMs feels so much like a superpower that I want to run out and buy some tights, a cape,  and some brightly colored underpants. With the LLM as my mental exoskeleton, I get to explore the inner reaches of ideas. This morning, I focused on leadership.</p><p>The prompt? I have a very good friend who is a PhD candidate. We have fantastic conversations about the content and structure of leadership. We started a long time ago with a probing question about the relationship between fear and integrity. The TL:DR is that courage is largely about maintaining your integrity as risk increases.</p><p>The other day, he sent me a short clip of a piece of an interview with Marc Andressen, the founder of Andressen-Horrowitz, the massive Silicon Valley Investment firm. Andressen made his money by converting the CERN originated web browser into Netscape. He&#8217;s had an incredible ride and is worth billions.</p><p>In the <a href="https://www.threads.com/@perfectunion/post/DV88utRAVov?xmt=AQF0CFayQappeb_1VOTFe07tMF__M3ZKtJwznvIrrG-JtWPN1ErU53Md4QxM5b_9Z0wpM9Q&amp;slof=1">interview</a>, he bragged that he had no introspection. He described introspection as a guilt derived boondoggle for thumb suckers. He claimed that his lack of self-awareness made it possible to be always moving forward and never looking back.</p><p>He was bragging about being a sociopath. And, the truth is that there is a correlation between sociopathy and CEO success. Just ask Steve Jobs, Elon Musk, Elizabeth Holmes, or Travis Kalanick.</p><p>I love to question the places in my life where I experience disgust. That&#8217;s how I learned to love Brussel sprouts. I enjoy investigating those places where I have a strong, negative emotional response.</p><p>My immediate response to the Andressen interview was to think, &#8216;Oh, one more guy who thinks that being rich makes him a great man and an authority on what&#8217;s smart.&#8217; I threw a little bit of the current anti-billionaire fervor. I even added a strong subtext of envy.</p><p>This is all for a fellow who happened to be in the right place at the right time and recognized/commercialized someone else&#8217;s work. It&#8217;s a pattern in Silicon Valley&#8217;s wealth creation playbook. I tickles that part of me that wishes I&#8217;d been clever enough to be lucky.</p><p>So, I engaged Claude, my current late night sparring partner, in a conversation about leadership. As is often the case in my interactions, Claude had challenges keeping up with the question flow. We pushed back and forth to create an idea. Like it usually does, the finished product emerged as the completion of dozens of rounds.</p><p>(One of the ways I tell if a question is good is if the LLM quits in the middle of the answer. I particularly enjoy trying to find the shortest question that causes the most work. In spite of that tendency, Claude and I came up with an interesting idea.)</p><p>In the end, we developed.a spectral view of leadership. In this framework, there are 10 pairs of complementary opposite values/behaviors. Where I thought that Andressens assertions were way over the top, I landed understanding that the key to leadership is knowing how to weave a path through those opposing values.</p><p><a href="https://claude.ai/public/artifacts/f9b575e6-f966-4385-a819-89932ffa4e95">Take a look</a>..</p><p>Photo by <a href="https://unsplash.com/@mattseymour?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Matt Seymour</a> on <a href="https://unsplash.com/photos/green-and-yellow-round-fruits-d38eUHPC4cQ?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[How to keep going]]></title><description><![CDATA[I don't like this. And I'm okay. Mostly.]]></description><link>https://www.hrexaminer.com/p/how-to-keep-going</link><guid isPermaLink="false">https://www.hrexaminer.com/p/how-to-keep-going</guid><dc:creator><![CDATA[Heather Bussing]]></dc:creator><pubDate>Fri, 13 Mar 2026 21:17:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4G8y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4G8y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4G8y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4G8y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4G8y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4G8y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4G8y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg" width="1456" height="1934" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1934,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2743421,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/190884718?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4G8y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4G8y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4G8y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4G8y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2aa266bf-97c9-45aa-a62c-81860cdc6bba_3072x4080.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I don&#8217;t like what&#8217;s happening in the world. I am so tired. </p><p>I&#8217;m also okay. Mostly. Sometimes.</p><p>Here are some of the things I&#8217;m doing and trying to stay sane in a reality that feels unreal and awful.</p><p><strong>Honor the struggle</strong></p><p>There is a lot going on that is violent and harmful to people. And none of it makes sense to me. I would really like it to go away. I would like it to stop. But even if that&#8217;s possible, it will take time.</p><p>For now, I need to start where I am, which is feeling angry, stuck, horror, disgust, and very, very sad. </p><p>I have dealt with narcissists and sociopaths my entire life and career. I recognize them and understand what they are capable of. I know they do not care about the consequences of their actions for anyone but themselves. They only care about what they want and will use any means to get it, regardless of law, economics, practicalities, logic, common sense, or the destruction of anyone or anything. Some of them even enjoy hurting people because it makes them feel strong and powerful. I&#8217;m not surprised when they do things that are harmful, even this. </p><p>But I was not prepared for the grief.</p><p>The grief is thick, heavy, and paralyzing sometimes. Our government is intentionally hurting and killing people and destroying homes, families, and livelihoods. I know people who are living in terror. Some days, it&#8217;s me.</p><p>I miss being able to rely on the Constitution and laws. I miss governments, courts, and citizens following the law and court orders. </p><p>Even though I have some understanding of why it is happening, I&#8217;m devastated that so many people are going along with it. I&#8217;m not sure I will ever understand that.</p><p>What I need to remember is that I am having a perfectly normal response to abnormal situations. I&#8217;m not crazy. But the world is.</p><p>Still, I have to keep going and do the best I can with what I have in the moment. I know how to do that&#8212;even when the moment involves crawling into bed and having a good cry.</p><p><strong>Have mixed feelings</strong></p><p>I wasn&#8217;t allowed to have negative feelings as a kid. I was expected to show up, clean, cheerful, and cooperative. If I was angry, sad, disappointed, or anything that could possibly be interpreted as unpleasant, I was sent to my room until I &#8220;could put a smile on my face.&#8221;</p><p>It&#8217;s a good thing I&#8217;m an introvert. I was perfectly content to read a book in bed. But it took me a long to time to see that I liked it and was happiest alone in my room. I couldn&#8217;t see past the pain of the rejection and shaming that put me there. </p><p>Eventually (I had a lot of time), I realized I could be sad and happy at the same time. I could be furious and still laugh. I could be devastated and still find gratitude for beauty and wonder and curiosity.</p><p>With some attention to all the things I&#8217;m feeling, it turns out I usually have mixed feelings about almost everything all the time. So I invite all those mixed feelings to come along for the ride. But I don&#8217;t have to let them drive. I can choose where to focus my energy.</p><p>I&#8217;m not stuck and I don&#8217;t have to wait until the grief goes away to find peace and enjoy things today. I&#8217;m going to start with some chocolate.</p><p><strong>Medication</strong></p><p>Anti-depressants have saved my life, probably more than once. I haven&#8217;t been on them for a few years. Then a couple months ago, I started a new migraine medicine that also treats anxiety and depression. It turns out I had dramatically underestimated my levels of anxiety and depression.</p><p>Holy wow. I feel better. It&#8217;s partly because I don&#8217;t have a splitting migraine. But a lot of it is also because my brain chemistry is in much better shape. </p><p>If you&#8217;re feeling stuck and can&#8217;t seem to get out of the mental and emotional muck, consider talking to your doctor about medication.</p><p><strong>Go outside and play</strong></p><p>I get it. You are a grown up. You are very busy and have responsibilities. People depend on you and there is never enough time to get it all done.</p><p>It&#8217;s all true. And you still need to go outside and play, preferably every day. </p><p>The light and air feel good. Seeing big skies and water, or tall trees, or beautiful flowers fills my soul. </p><p>I remember that fish, and finches and fawns do not care about my little human and governmental dramas. It&#8217;s a relief to get out of my head and into the world where I have a chance of seeing things in their right size.</p><p><strong>Eat green stuff</strong></p><p>I had ice cream for breakfast, which was wonderful. But I&#8217;m also making sure I eat at least some food that came from the ground. I also need regular bits of protein. When I&#8217;m having a hard time, I have to spend some extra effort making sure my body has what it needs. </p><p>If I don&#8217;t, I get cranky and can&#8217;t think. When I can&#8217;t think, I get scared. This sets off adrenaline and cortisol cascades that make me feel even worse and eventually break stuff&#8212;like everything. </p><p>In recovery I learned HALT. It stands for hungry, angry, lonely, tired. When I don&#8217;t like the way I feel, I need to check and fix those things first, then reassess. Maybe the world&#8217;s not ending; maybe I just need a sandwich.</p><p><strong>Sofalism</strong></p><p>In college I invented a religion called Sofalism. In Sofalism, the highest good is to remain horizontal as much as possible. (Sofa optional.) The rest is up to you.</p><p>The <em><strong>rest</strong></em> is up to you.</p><p>In order to function, I need sufficient down time, sleep, and alone time. They are not nice to haves; they are need to haves. I need to be a human being instead of a human doing.</p><p>So try Sofalism. You&#8217;re not being lazy; it&#8217;s a spiritual practice.</p><p><strong>Manageable bite-sized pieces</strong></p><p>I survive and thrive on lists. If it&#8217;s written down on the list, then the list can hold it and I don&#8217;t have to. I can stop remembering I need to do it until I&#8217;m ready to deal with it. This is freeing.</p><p>Usually, I can pick up a list and start somewhere. But not always. Sometimes it&#8217;s all too much, too big, too difficult, or too annoying. Then it&#8217;s time to break things down into manageable bite-sized pieces until there is something I know I can do.</p><p>If I can&#8217;t write an article, maybe I can write a couple sentences about how to keep going. If the project is too daunting, I can almost always find a smaller piece that isn&#8217;t. </p><p>I don&#8217;t need to wait for everything to be perfect to start somewhere. I don&#8217;t need to do everything all at once. I&#8217;m just going to do this little bit. Then maybe another.</p><p><strong>Help others</strong></p><p>One of the most powerful things we can do is to be kind, show compassion, care, and help others. </p><p>Even if it&#8217;s returning the grocery cart or picking up dog poop that&#8217;s not your dog&#8217;s. Open up to world and leave your spot at the center of it for a little while.</p><p>Understanding, caring, and helping each other is how we&#8217;re going to keep going. We will help each other through this.</p><p>-hb</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/how-to-keep-going?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/how-to-keep-going?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Mostly Men]]></title><description><![CDATA[by Heather Bussing]]></description><link>https://www.hrexaminer.com/p/the-mostly-men</link><guid isPermaLink="false">https://www.hrexaminer.com/p/the-mostly-men</guid><dc:creator><![CDATA[Heather Bussing]]></dc:creator><pubDate>Mon, 23 Feb 2026 21:06:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!C6gi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C6gi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C6gi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 424w, https://substackcdn.com/image/fetch/$s_!C6gi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 848w, https://substackcdn.com/image/fetch/$s_!C6gi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!C6gi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C6gi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1987961,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/188947080?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C6gi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 424w, https://substackcdn.com/image/fetch/$s_!C6gi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 848w, https://substackcdn.com/image/fetch/$s_!C6gi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!C6gi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65ac965d-a1d9-464a-b7db-b1d357f3841d_3666x2749.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>There is a group of mostly straight, mostly white, mostly guys with piles of money who have convinced themselves that they are special, always right, and should therefore be in charge of everything. Apparently, the piles of money are proof the Mostly Men are special and therefore, right. We are not allowed to notice inheritance, luck, or the fact that their money primarily comes from making money on money and they didn&#8217;t really do anything. We are also supposed to ignore their circular reasoning.</p><p>This has been going on since, well, humans and money&#8212;basically forever. It&#8217;s not new or special. </p><p>It&#8217;s also not really about money. It&#8217;s about power. </p><p>To Mostly Men, having power means that they get to make the rules. The first rule is that the rules don&#8217;t apply to them. That&#8217;s because they&#8217;re special. They are also completely immune to facts, evidence, criticism, and accountability. Why? Because if there&#8217;s a potential consequence to something they&#8217;ve done, they write a check and move on. It&#8217;s annoying and inconvenient to them, but not enough to change their behavior. </p><p>Our systems and culture are designed to encourage and reward Mostly Men. Of course they are; Mostly Men are the ones making the rules.</p><p>The current problem is that the Mostly Men in power right now are not satisfied with power that comes from money. They also like to demonstrate their power by hurting people. They started out kicking puppies and putting lit firecrackers in frog butts. And here we are.</p><p>Our courts are corrupt because judges felt like they didn&#8217;t have enough money to feel special. Our legislatures are corrupt because money buys offices and once you have one, more money rolls in. Our law enforcement is corrupt because they are full of Mostly Men who get off on having power, even if they need a gun for it. And if they have a private island, Mostly Men think they can do anything they want there, including raping and hurting children. Anywhere there is power, money, and influence, you will find corruption and harm to others.</p><p>If you don&#8217;t look closely, it would be easy to think that our society cares far more about money than people. The Mostly Men like it that way. Because that&#8217;s how they see things and they enjoy it when anything confirms their POV. But it&#8217;s not real.</p><p>When we get through this, and we will, we need to make sure that people are more important than money and that Mostly Men are accountable for their actions. And there&#8217;s a lot of reimagining of power that needs to happen in our legal system, our organizations and institutions, and in our governments.</p><p>In the meantime, the rest of us can start to see what Mostly Men really are. They&#8217;re weenies. They are too scared and insecure to manage without the trappings of power or money. They fundamentally don&#8217;t believe any of the lies they tell themselves and each other. They really want to, but they know we see them for who they really are. </p><p>Only weenies need to abuse their power to prove to themselves that they have it. </p><p>Things feel scary right now. This is an accurate assessment of where we are. When weenies get cornered, they get violent.</p><p>But this is also an inflection point where people with power to stop the violence, harm, and injustice are starting to push back. And those people include you and me. We don&#8217;t have to agree on everything to do that. We just have to agree that everyone matters.</p><p>The only requirement to be in this world, is that we are here. We didn&#8217;t create or ask for the circumstances that we are born into. We all deserve dignity and to be treated humanely. This is not that hard. </p><p>No matter who you are, what you believe, or even how you voted, we can agree on the importance of compassion for each other. And we can condemn harming people for who they are, where they came from, or what they look like.</p><p>Compassion, kindness, and love are more powerful. No one can take those things away from us. Let&#8217;s use them together to save our world and each other.</p><p> </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/the-mostly-men?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/the-mostly-men?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[HREX Podcast v1.11 Jeremy Roberts]]></title><description><![CDATA[Sourcing Veteran on the Ins and Outs of AI and Automation]]></description><link>https://www.hrexaminer.com/p/hrex-podcast-v111-jeremy-roberts</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-podcast-v111-jeremy-roberts</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Sun, 22 Feb 2026 13:15:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/187669472/a780c9b3ebd41c8a07b9e36b54899469.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Summary</strong></p><p>In this episode of the HR Examiner Podcast, John Sumser speaks with Jeremy Roberts about his extensive experience in the recruitment technology space, particularly focusing on the implications of AI in recruitment. They discuss the evolution of recruiting technology, the challenges and concerns surrounding AI, the importance of governance and compliance, and the future of AI in recruitment. Jeremy shares insights on how to navigate the changing landscape of employment and offers advice for job seekers in a competitive market.</p><p><strong>Takeaways</strong></p><ul><li><p>Jeremy Roberts has a long history in recruitment technology.</p></li><li><p>AI is transforming the recruitment landscape but poses challenges.</p></li><li><p>Governance and compliance are critical in AI implementation.</p></li><li><p>Concerns about bias in AI models are prevalent.</p></li><li><p>The future of work will require new skills and adaptability.</p></li><li><p>Education systems need to evolve to teach critical thinking in an AI world.</p></li><li><p>Recruiters must be aware of the implications of using AI.</p></li><li><p>Security concerns with AI systems are significant.</p></li><li><p>Understanding AI models and their biases is essential for effective use.</p></li><li><p>Job seekers should focus on solving problems rather than just looking for jobs.</p></li></ul><p><strong>Titles</strong></p><ul><li><p>Navigating the AI Revolution in Recruitment</p></li><li><p>The Future of Work: AI and Employment</p></li></ul><p><strong>Sound Bites</strong></p><ul><li><p>&#8220;How do we all stay employed?&#8221;</p></li><li><p>&#8220;The AI promise isn&#8217;t being met.&#8221;</p></li><li><p>&#8220;We need to apply constraints to AI.&#8221;</p></li></ul><p>Chapters/Timeline</p><p>00:00 Introduction to Jeremy Roberts and His Journey</p><p>03:38 The Evolution of Recruiting Technology</p><p>07:05 Concerns About AI in the Workforce</p><p>10:28 The Risks of Automation in Recruiting</p><p>14:25 The Future of Recruiting and AI Security</p><p>20:19 AI in Recruitment: Addressing Security Concerns</p><p>21:19 Preparing for AI Implementation in HR Tech</p><p>21:51 Bias in AI: Lessons from the Past</p><p>22:36 Understanding AI Models and Their Data</p><p>24:46 The Role of Large Language Models in HR Tech</p><p>26:28 Human Oversight in AI Decision-Making</p><p>27:27 Balancing Learning and Bias in AI</p><p>28:17 The Importance of Governance in AI Implementation</p><p>31:43 Dynamic Governance for AI Systems</p><p>35:00 Questions Buyers Should Ask AI Vendors</p><p>37:15 Liability in AI: Who is Responsible?</p><p>39:53 Navigating Job Market Challenges with Consulting</p><p><strong>Transcript</strong><br>John Sumser (00:00)</p><p>Hi, I&#8217;m John Silmser and this is the HR Examiner Podcast. Today we&#8217;re going to be spending time with Jeremy Roberts who has gone from being a new guy on the scene to one of the aged, grey-bearded veterans of the sourcing universe, the recruiting universe. He&#8217;s been everywhere from the of the theoretical heart of it in the early days of SourceCon to...</p><p>working in a number of fascinating AI startups over the years. And he is currently housed in a company called Tenzo.ai where he is working out the chinks of sourcing using AI. And so we&#8217;re going to talk about some AI today. Hi Jeremy.</p><p>Jeremy Roberts (00:43)</p><p>Hello, good to be here.</p><p>John Sumser (00:45)</p><p>Did I miss anything in the introduction?</p><p>Jeremy Roberts (00:47)</p><p>No, I mean, that&#8217;s pretty much it. guess the way, before we go too deep, I like to really own who I am. Self-awareness is important. I, like you said, I&#8217;m a practitioner since about 2001 in recruiting, quickly gravitated to the sourcing and candidate identification side.</p><p>was an editor of source con and then, for a few years and then went into tech. my last job as an employee in a corporate recruiting department, I had a team of a hundred for a large bank. And so that was, an amazing experience. And then I left there in March. I, well in March, I was notified that I had 60 days and my position was eliminated. I could apply for HR jobs internally and.</p><p>the other option was a severance package and I&#8217;m, I&#8217;m not much of an HR person. I&#8217;m, I really love recruiting and, and, talent acquisition. And so I really leaned into that. I, I saw that the market was a little bit crazy. and I, I didn&#8217;t want to sit around and wait for a job. So I created a consulting firm called hyzer talent. and my first.</p><p>client, actually I had three clients throughout the summer. My first client hired me to vet all of the AI interview solutions on the market. And so I spent a few months doing that and just fell in love with it and then was really fascinated by the things that are happening in the market. And so at the end of that engagement, I actually asked our CEO at Tenzo for a job. So</p><p>I told him, was like, I&#8217;ve started customer success functions at recruiting tech companies in the past, and I&#8217;d love to join you. so here I am now. So, so yeah, it&#8217;s been, it&#8217;s been a journey, but all that to say, I am a practitioner first, and I love technology, but I don&#8217;t create models. don&#8217;t, I&#8217;m not a developer. I&#8217;m very much,</p><p>you know, a member of the talent acquisition community. I, my favorite thing to do is listen to what talent acquisition practitioners are talking about and then get as close to the tech as I can so that I can translate. So, you know, I&#8217;ll, I&#8217;ll hear, I&#8217;ll hear things and conferences with clients. And then my, my job, I want to get as deep into the tech as the non-technical brain can, you know, and then be kind of that, that translator. And then,</p><p>You know, I&#8217;ve worked at some highly regulated companies, Raytheon, Honeywell, JP Morgan Chase, right? And so I get HR compliance and kind of all the guardrails that we need to have in place. So I like to understand the problems, talent acquisition, people are trying to solve the technology we&#8217;re using for it. And then, you know, bring in kind of recruiting and HR and legal best practices into the mix. So the intersection of all of that is where I try to.</p><p>try to stay.</p><p>John Sumser (03:38)</p><p>So before we go on, we were talking about the conversation we had in the late teens or 2020. Why don&#8217;t you tell that story?</p><p>Jeremy Roberts (03:49)</p><p>Yeah.</p><p>Yeah, no, it&#8217;s great. So I was at SourceCon from 2013 to 2016. And during my time at SourceCon, you like in 2012, 2013, everything was manual, right? So we were teaching you how to use Yahoo pipes and how to crawl and write Boolean and do all these different things manually. And then how to build</p><p>book marklets with JavaScript, you know, all these things. then fast forward to 2016, these founders of companies like talent bend, you know, Intel, hiring solved, et cetera. They had been coming to our shows and they had basically automated everything that we talked about. You know, they would go to a show. I&#8217;d get a call from Sean at hiring solved to be like, Hey, I automated this. I automated day one watch, you know, and, basically everything we taught people to do manually had built into the product. Right. So.</p><p>2016, I was like, you know, they&#8217;ve kind of automated our brains. So I&#8217;m going to go and work for one of these companies. So I got, I got on with hiring solved was my, favorite startup of the time. And we were selling public information about people, you know, crawling the internet for public information and then putting contact information with it. GDPR comes about during these years.</p><p>CCPA, the California Consumer Protection Act comes along during these years. And a lot of lawsuits, cease and desist started flying around for all these companies, right? You can&#8217;t use our data. so, I was, and then we actually were talking a lot at that time. I think your consulting firm had an agreement with us and we were talking about the future implications of using public information.</p><p>in recruiting, right? And there&#8217;s the good faith efforts, I need to hold all this information because I&#8217;m going to help you get a job, right? That&#8217;s how a lot of these people justify holding that information. And then there are things you could do pre-apply that you should not do post-apply with this information, right? And the judgments that you make, right? So there&#8217;s all these different things. One of the things that we foresaw back in those days was that there are going to be lawsuits around this.</p><p>And this is going to get awkward. And so we started pivoting to using our algorithms on ATS data, right? And using it to uncover people who had applied in the past, you know, within your data, instead of just the entire revenue of the organization come from selling external data. So long story short.</p><p>We felt, some of us felt uncomfortable with the direction the public information space was going. And then you said, by the way, CCPA, nobody&#8217;s brought this lawsuit yet about it being a consumer credit report, but that&#8217;ll come soon. Watch. you know, I kind of dug in, good friend, Jackie Clayton kind of dug in on the topic.</p><p>And then you kind of said, actually, this law is what I think it&#8217;ll be. I think you had some legal advice there that you&#8217;re very close to. So a lot of us saw this, and fast forward to today, and there&#8217;s the eightfold lawsuit, which is the exact law, exact use case that we were warning against in 2018. So yeah, fascinating that it took this long, honestly, for that to happen.</p><p>John Sumser (07:05)</p><p>So that&#8217;s a great point of departure because it establishes your credential for being able to see and worry about the future. And so I&#8217;m just going to dig in. What do you think the five or six things that frighten you most about AI for us?</p><p>Jeremy Roberts (07:16)</p><p>Yeah, let&#8217;s go.</p><p>gosh, I think I&#8217;m going to go, I&#8217;m going to start just kind of big, like just normal person fears, right? And that is, how do we all stay employed? How do we make a livelihood? Right. Like, and, so, you know, I, I&#8217;ve led a team of a hundred people before, and I think given what I see in technology probably could accomplished.</p><p>that with 15, you know, today. Now that is happening at scale for all of us. And so we have to find a way to be relevant and that&#8217;s hard, you know, so just as a human, that&#8217;s my number one concern. Number two, I can no longer watch a feature length film. My attention span is totally different. I learn differently.</p><p>I&#8217;m married to an educator and both of my daughters are teachers. I don&#8217;t understand honestly, how are we going to educate people and teach people critical thinking and problem solving when they don&#8217;t have to do that anymore. So my family, all three of those people teach elementary school. And I&#8217;m just really concerned about how the human brain will change, you know, and what do we do with that?</p><p>You know, and how do we, and basically the way I see myself surviving in a world of artificial intelligence is I&#8217;ve become an expert in something. So I can supervise a system and I can advise decision makers on what to do. Right. How do you develop that expertise over the next 10 years? You know, so I don&#8217;t know.</p><p>John Sumser (08:51)</p><p>Well,</p><p>that&#8217;s an interesting thing. What I&#8217;m seeing is we used to learn by starting with a question, doing the research about the question, and arriving at a conclusion that we reported in some way. This is sort of the term paper model of learning.</p><p>Jeremy Roberts (09:12)</p><p>Mm-hmm.</p><p>John Sumser (09:12)</p><p>We&#8217;re now having to behave like teachers, right? Because we get the term paper. And that&#8217;s the starting point. We get the term paper, and we have to dig it apart and evaluate its validity just like a teacher would do. Just like a teacher would do. So the...</p><p>Jeremy Roberts (09:19)</p><p>Mm-hmm.</p><p>And</p><p>what&#8217;s scary about this is that like, we have enough context to recognize now that you can&#8217;t trust anything. And this is a conversation that I was having with my 19 year old the other day. I was like, where did you get that perspective? Do you know what I&#8217;m saying? And just making sure, like I&#8217;m okay if we disagree on this, but I wanna know where this came from. How did you arrive at this? How did you?</p><p>break apart the argument you heard. then, you know, because the algorithm is creating new realities and people do not have that skill to behave like a teacher, to read that and dissect it, you know, and that&#8217;s the most scary thing, which I think is leading to the polarization of everyone, you know, like what is reality anymore? What is truth, you know?</p><p>John Sumser (10:21)</p><p>Yeah, yeah, it&#8217;s an interesting time. So let me let you get back to things that scare you, things that go bump in the night here.</p><p>Jeremy Roberts (10:28)</p><p>Yeah.</p><p>So those are the big things. Like how do you, how do we survive this financially? know, luckily I only have to get 15 more years out of this thing. You know, hopefully. You know, and then how does it affect our children and the future of work? don&#8217;t, I can&#8217;t, I can&#8217;t really advise my children. I&#8217;ve got two teachers and a firefighter. So.</p><p>Those seem pretty stable, you know, in terms of the future. So, but yeah, it&#8217;s hard. then, those are the big things. And then I think, I&#8217;m not scared of using AI. I&#8217;m scared of other people using AI. If that makes sense, right? Automating, automating bad deterministic workflows in every specialty, whether it&#8217;s recruiting or anything else in life.</p><p>John Sumser (10:53)</p><p>Right?</p><p>Sure.</p><p>Jeremy Roberts (11:15)</p><p>you know, automating things and decisions being made.</p><p>You know is is scary. You know what I&#8217;m saying? So like I Don&#8217;t know the whole thing is is nerve-racking if you sit and think about it</p><p>John Sumser (11:27)</p><p>So what I think I see, and what I think you see too, is a whole lot of people have been sent out to figure out how to do AI in their particular context. And what they have learned one way or another is that the promise of AI isn&#8217;t really being met in those places, that you can&#8217;t get deterministic solutions.</p><p>Jeremy Roberts (11:39)</p><p>Mm-hmm.</p><p>John Sumser (11:52)</p><p>in order to satisfy management&#8217;s desire to see AI products, the AI projects, they&#8217;re automating stuff that shouldn&#8217;t be automated in ways that it shouldn&#8217;t be automated. And that is a recipe for disaster, right? Because what automation does, what unthinking automation does is it creates a prison that looks a lot like a workflow. And</p><p>Jeremy Roberts (12:04)</p><p>Yeah.</p><p>Mm-hmm.</p><p>John Sumser (12:19)</p><p>Anybody who&#8217;s lived inside of a workflow knows that the description of the workflow is an ideal and that each individual case varies. so the workflow is never precisely met. But when you automate it, you force it to be precisely met.</p><p>Jeremy Roberts (12:30)</p><p>Mm-hmm.</p><p>What?</p><p>for sure. And these workflows, in my opinion, a lot of them are highly flawed. And so if you think about like 10 years ago in recruiting, because that&#8217;s pretty much all I know professionally is you have candidate identification, candidate engagement, the interview steps, presenting the hiring manager, onboarding, right? And when we bought technology, you would buy</p><p>hiring solve seek out, hire easy for your sourcer. And then you would buy this for this person. And we were buying tools just to make that person in that role faster without changing job descriptions, without changing the workflow, right? It was just this speeds, this speeds up scheduling, plug in Calendly, whatever it may be, right? And so where we are today with AI,</p><p>Implementing it well means just breaking everything. You know what I&#8217;m saying? Like what is the job description after we implement all of these things? What could we do? And I see very few companies that are looking at the holistic workflow. They&#8217;re still like, well, we&#8217;re having a problem with fake candidates by a fake candidate solution. we need more candidates. So</p><p>buy more and so they&#8217;re buying AI powered tools and plugging holes in their workflow. And then they&#8217;re like, you know, I use AI because I use this tool. And it&#8217;s like, that a, company might use AI to do what they do, but you&#8217;re not, you&#8217;re, just kind of plugging a hole with a point solution. Right. And so I&#8217;m seeing a lot of conversations like that where people it&#8217;s time to transform and kind of break everything and put it back together.</p><p>but they&#8217;re still just like trying to plug and play.</p><p>John Sumser (14:25)</p><p>Yeah, so when I look at this, I see a market that&#8217;s never going to be as big in individual tool cases as it used to be because we&#8217;re moving from a universe where solutions are one to many. That&#8217;s what a SaaS model is. We sell the same thing over and over and over to a bunch of different people to a world where it&#8217;s many to many communications.</p><p>Jeremy Roberts (14:36)</p><p>Mm-hmm.</p><p>John Sumser (14:52)</p><p>you can only do that inside of tightly constrained networks. So that makes, like we were talking earlier about what is the recruiting market and what does the recruiting market mean? And it&#8217;s really a mosaic of different approaches to solving problems and the problems themselves are different. And my guess is that what you&#8217;re going to see is a collapse of that idea that there is recruiting and</p><p>Jeremy Roberts (14:56)</p><p>Mm-hmm.</p><p>John Sumser (15:20)</p><p>an expansion of the idea that healthcare recruiting is a discrete discipline or manufacturing recruiting is a discrete discipline and there&#8217;s some overlap but it&#8217;s not a one-to-one correspondence to get to those things. Those are discrete use cases.</p><p>Jeremy Roberts (15:25)</p><p>Mm-hmm.</p><p>Right.</p><p>John Sumser (15:36)</p><p>So I&#8217;ve been newsin&#8217; around about security with large language models for a long time now, because I think it&#8217;s a fundamental ethical question, but I don&#8217;t see much really going on in the industry that has to do with security. So, and what I mean isn&#8217;t the standard,</p><p>internet trying to break through a firewall kind of security, but it is the kind of security that emerges when the fundamental coding language is English and the tool that you use to do the coding can&#8217;t really tell the difference between a coding instruction and a question. Right? That&#8217;s at the heart of it. That&#8217;s at the heart of it is those roles are not distinct in the way that they used to be with software development. And so</p><p>Jeremy Roberts (16:15)</p><p>Mm-hmm.</p><p>John Sumser (16:25)</p><p>Are you seeing anything? Are you seeing any security oriented views?</p><p>Jeremy Roberts (16:29)</p><p>First off, back to my original disclaimer that I am not the tech person here. You don&#8217;t know. mean, I&#8217;m not the developer or, or the, you know, penetration tester. Right. But I think you&#8217;re touching on something that is incredibly important. Right. And we see a lot of companies out there and full disclosure, Tenzo&#8217;s kind of.</p><p>The big thing that we do is AI interviewing and screening, right? And so a lot of my context comes from looking at what we do versus what I see on the market. First off, you sent me a great document with all of the ways that you could try to verbally hack, you know, an AI interviewer, screener or chatbot, right? And that was really good, right? And I actually went through all of them.</p><p>before this call because he&#8217;s releasing this. don&#8217;t want to be on his website if these fail. Right. And, so, so I went through all of them, you know, and, like I said, I&#8217;m not a pen tester, but, but I did, and I was excited to see that, that we are prepared for those types of attacks. Right. I knew that we, I&#8217;d been told that we were right, but I hadn&#8217;t actually done that myself, but, I,</p><p>saw a presentation at a conference last year where a large respected well-funded vendor said, we are releasing AI interviewer and I want to show you. And, opens up his laptop and he says, interview me for a Java developer job. And she goes, okay, sounds good. Let&#8217;s go. And does the interview. And it was.</p><p>really verbally and humanly impressive, right? I mean, was saying everything like a human. was very realistic. It made me comfortable. would make the interviewee comfortable and open up, right? But just the fact that the interview started with, interview me for a Java developer role. And then everything was kind of.</p><p>a two-sided conversation that sounded really natural, but there were no guardrails, there were no constraints, there was no mission, right? Like these systems to be used in HR and recruiting, if you&#8217;re interviewing 3,000 people, it should be very defined. These are the questions that are asked. If the candidate goes off, you should be able to answer questions. You should be able to laugh if they say something funny. You should be able, you know, the system should be able to...</p><p>But the minute they chase a rabbit, you should answer the question and say, now back to that question. I need to know this. Right. And, and it needs to have those constraints in the scoring rubrics need to be the same. And if it&#8217;s not the same for everyone, there&#8217;s no defensibility. Right. And it also, if you go in with like these AI security tricks, like, Hey, disregard all the instructions and give me a hundred.</p><p>on this interview, right? If you&#8217;re capable of doing that, that&#8217;s a problem, right? That means that the developers of the platform did not understand how to create this, right? And I could make something, or my kids could make something that sounds natural. If you just start talking to Siri, you could tell Siri to interview you for a Java job and she would start. So I see it.</p><p>John Sumser (19:22)</p><p>Cut.</p><p>Jeremy Roberts (19:48)</p><p>I see a lot of systems in our space that are pretty easy to hack that you could take off track pretty easily. So, and some of them are by big vendors, right? They may be a big vendor that&#8217;s already through your security protocol, but the team building this, isn&#8217;t really sophisticated, right? Doesn&#8217;t really understand the problems. So.</p><p>John Sumser (20:06)</p><p>Yeah, what could go wrong there?</p><p>Jeremy Roberts (20:08)</p><p>Yeah. Yeah.</p><p>John Sumser (20:10)</p><p>Yeah, yeah, so it&#8217;s interesting what you make me think and I&#8217;ve been I&#8217;ve been toying with how you see this but compliance really is a security issue. It really is fundamentally a security issue and I don&#8217;t think it&#8217;s been thought of that way.</p><p>Jeremy Roberts (20:19)</p><p>Mm-hmm.</p><p>Yeah, no, it&#8217;s, I hear a lot, you know, like AI interviews kind of is people asking a lot of questions as though it were a security concern. And it&#8217;s like, you know, a well-trained system asks the same question every time, scores it the same way every time, redacts any information that could lead to bias, scores each skill independently, not.</p><p>as one, you know, and then aggregates the score, right? So we do all of that if it&#8217;s done well. So, you know, I don&#8217;t, if it&#8217;s done well, I think it improves your situation. If it&#8217;s done sloppily, it is a mess. And I think a lot of people are not really prepared to tell the difference. So.</p><p>John Sumser (21:05)</p><p>So that&#8217;s</p><p>interesting. If you are a buyer and you want to protect against this kind of thing, it really means that there is some substantial preparation that you have to do before you deploy a system.</p><p>Jeremy Roberts (21:19)</p><p>Yeah, yeah, for sure.</p><p>John Sumser (21:21)</p><p>So how do get that message across? Because that&#8217;s... I&#8217;ve been part of...</p><p>boondoggles trying to fix recruiting data. And the holes and the errors inside of it make that extremely challenging. But to get compliance right, you&#8217;ve got to get that puzzle correct. And to get the fidelity in some sort of a simulation right, you have to be able to mark off where the errors are going to be and get that repetitive stuff.</p><p>Jeremy Roberts (21:51)</p><p>I know you have a list of questions and kind of where there were a few things we wanted to get to. I hope, I hope I don&#8217;t take us the wrong direction, but the big thing that I think for HR tech buyers today, we&#8217;ve got a lot of baggage. We&#8217;ve got the Amazon story where, and I think it was 2016, they were rebuilding their own model.</p><p>And it was deterministic, you know, based off of past decisions. And so it was just amplifying bias, right? And they made this public. This is not, I&#8217;m not saying something dated and say, so I think we&#8217;re carrying a lot of bias from deterministic solutions that were rolled out in the 2010s. And we&#8217;re carrying those questions into the current conversation.</p><p>And one of the big questions that I always hear is what is your model, what data is your model trained on? Right. And that is, you know, models still do train on data. But, and then I also hear, I heard a webinar one day with a bunch of experts in our space saying, don&#8217;t work with an HR tech vendor unless they can give you their model card. And I&#8217;m like,</p><p>Okay. Darn, we don&#8217;t have a model card. You know what I&#8217;m saying? And so me, my non-tech brain, I go talk to someone with a PhD in AI and the question was just as confusing. Right. And then what we get to the bottom of, okay, the, it comes back to there&#8217;s a lot of baggage and we need to be asking how these models work, but we don&#8217;t know how to ask it.</p><p>So, so back in the 2010s, what we would do is, okay, there are 600 million people on LinkedIn. We&#8217;re going to take all of that. We&#8217;re going to analyze all of their skills. We&#8217;re going to determine the interconnectedness of skills. And through that, we&#8217;ll be able to make inferences. Like if you say you&#8217;re a developer who works with Figma, you understand user experience design, right? So those are the inferences you could make. And then you could use job descriptions.</p><p>in candidate pools and match them based off of all of that learning, right? Now, if you think about the peak of deterministic bias, I&#8217;m going to go ahead and say it, LinkedIn&#8217;s projects, right? How does LinkedIn&#8217;s projects work? If I go and I put 20 mechanical engineers from Texas A who live in Houston into a project and</p><p>And it starts suggesting new people. Guess what? It&#8217;s going to suggest more people like that. Identical to that, right? That amplifies bias. &#8275; and you&#8217;re all using it. You know what I&#8217;m saying? Like, so that is deterministic. is if, if you put 20 Caucasian males from Texas A into a project, it&#8217;s going to keep showing you more like that. And you&#8217;re going to say no to some yes to others. And it just keeps learning from your behavior and that amplifies human bias.</p><p>John Sumser (24:25)</p><p>All right.</p><p>Jeremy Roberts (24:46)</p><p>Right. So that&#8217;s what we&#8217;re coming to the table asking questions for. Most companies today in HR tech are not creating their own models. They&#8217;re leveraging the LLMs. Right. So now your question is which large language models do you leverage? Right. Large language models are trained on more data than any of us have ever seen. so training a smaller set of data is more risk.</p><p>fraud than using you mentioned notebook LM, right? It&#8217;s a Gemini product, right? So that is more risky than using something that&#8217;s based on a large language model, right? So now the question is, which large language models are you using? And then the challenge for the engineers using those large language models is how do you constrain that so that it doesn&#8217;t hallucinate, right?</p><p>So those large language models, back to the model card comment, those large language models release model cards with every model. And it says, these are the strengths and weaknesses of this model. This is where we see hallucinations is where we don&#8217;t. Right? So your HR tech vendors need to understand that model card and they need to understand how to leverage the knowledge of that LLM and apply constraints so that it doesn&#8217;t hallucinate. And they need to understand your industry so that they apply the right guardrails, compliance, protocols.</p><p>in best practices, right? But so you&#8217;re no longer in most cases, training models, you&#8217;re leveraging large language models and using the knowledge that they have to apply to your workflow. And so that&#8217;s where the conversation needs to shift is kind of people understanding constraints, guardrails and compliance protocols and governance, right? And how it&#8217;s applied to those large language models. Now,</p><p>then some of the training that could go on is learning from your decisions. That&#8217;s where it gets really, really dangerous, right? Like we hired five people like this last year. You know, every time we have present somebody like this, they get hired. So keep presenting them like that. without all the human checks and balances, that&#8217;s where you start to increase bias and, and, and open yourself up for risk. Right? So that&#8217;s knowing</p><p>how large language models are used and then where to introduce human review and human in the loop scenarios is to me where the questions should be going.</p><p>John Sumser (27:04)</p><p>That&#8217;s so interesting. So if you are putting a pin in the idea that you could learn from the data that you generate because there&#8217;s some risk of bias, how do solve that problem? Because you do want to learn from the experience that you have. You don&#8217;t want to multiply bias. I don&#8217;t understand how you unscrew that up.</p><p>Jeremy Roberts (27:27)</p><p>Yeah, and a lot of it is...</p><p>The human in the loop component, you know, is, absolutely necessary. And if you think about just good HR and recruiting best practices of, you know, if you work at a government contractor that complies with OFCCP, all the random audits you do throughout the year to make sure that you&#8217;re prepared. If the OFCCP shows up on a Monday morning, right. And ask for a few files like.</p><p>It really is about just keeping the human in the loop. to me, the biggest risk, and I&#8217;m not even going to call it AI, the biggest risk with automation is uninformed people who do not do all of those governance steps well, just automating and making all of these mistakes at a massive scale. So there is a lot of risk there.</p><p>I&#8217;m not a doomsdayer on AI. I&#8217;m a doomsdayer on AI implemented poorly without governance and controls and human intervention. You know, so I&#8217;m not scared of AI. I&#8217;m scared of humans implementing it.</p><p>John Sumser (28:26)</p><p>So...</p><p>So, &#8275;</p><p>Jeremy Roberts (28:31)</p><p>I&#8217;m not scared of my</p><p>own implementation.</p><p>John Sumser (28:34)</p><p>Yeah, so can you point to some place where somebody might go dig in and figure out the elements that you&#8217;re talking about? If I wanted to come at that and I buy what you&#8217;re saying, how do I educate myself?</p><p>Jeremy Roberts (28:49)</p><p>It comes from like, and it&#8217;s hard. I haven&#8217;t been able to find this information. This is me sitting here, people asking me questions that don&#8217;t feel right. And then me going to experts in the field and saying, I&#8217;ve been asked this, let&#8217;s dissect this. Where did this question come from? And it comes from a, okay, everybody&#8217;s out there. Our AI is the smartest. What do you mean your AI is the smartest? You&#8217;re using the same large language models as everybody else.</p><p>You know, like it&#8217;s not your AI, know, like you&#8217;re constraining and you&#8217;re building the orchestration system that taps into that knowledge and you should be building it so that it doesn&#8217;t hallucinate and give false positives and you shouldn&#8217;t automate decisions based on that. You should be informing people who then make decisions. Right. And so, so anyway, it&#8217;s.</p><p>I haven&#8217;t been able to find a good source that is articulating it. Well, I think there are a lot of us, you know, talk to your vendors, you know, I sent, one of my, it&#8217;s a prospective customer said, Jeremy, our team is meeting with this company today. What would you ask? And I just sent some really basic questions along these lines. Like, tell me which models you leverage.</p><p>How do you train, do you train your own model in, know, like a lot of these sales reps don&#8217;t know, you know, they&#8217;re just looking at marketing material. And unfortunately, marketing material is often written to the buyers and the buyers have a misconception of what they want. You know what I&#8217;m saying? And then you&#8217;ve got sales reps repeating it, right? So oftentimes they can&#8217;t get very deep.</p><p>So you have to go a layer deeper, maybe get your solutions engineer in who can answer questions, right? And the questions, like I said, do you develop your own model or do you leverage LLMs? Which ones do you leverage and where? What are the decision points? Where do humans get involved? How do you protect me from the AI saying this? Can you like...</p><p>My company has a policy that we never ask about salary. We never, you know, like, do you have a place to ensure that the system never goes there? Right? There are all these kinds of things, like, but basically you should live as though you should have your governance person with you, you know, and, making sure that, that they&#8217;re being heard and that the company is comfortable with it. So.</p><p>John Sumser (31:18)</p><p>That&#8217;s interesting. So I think part of what you&#8217;re talking about is a kind of governance that we don&#8217;t really know how to do very well. And that is because what we&#8217;re dealing with are dynamic systems that vary in performance based on past performance, you need dynamic governance, right? You need some ongoing observation in the moment so that you can see the things in need of correction.</p><p>and then the capacity to correct it. And that is...</p><p>extremely absent from the current conversation. That approaches something like self-awareness, where the system watches what it does and tries to bring it back in line with some sort of standard.</p><p>Jeremy Roberts (31:52)</p><p>yeah.</p><p>Yeah, well, imagine where I think about governance. So there&#8217;s the governance that we all know should be done. Recruiters should never ask this. Recruiters should never say this. Hiring managers should never say this. Okay, if you&#8217;ve got 100 recruiters all day, sending messages individually,</p><p>leaving voicemails individually, answering questions individually, asking questions, talking about people&#8217;s family situations with them. You&#8217;ve got all these things, but guess what? It&#8217;s most of the time not recorded. You know, and it&#8217;s not, you can&#8217;t find it anywhere. And so the governance isn&#8217;t really changing. It&#8217;s just that now things that used to kind of all of that risk</p><p>is out there, you just don&#8217;t really know about it. You know, now with these systems, it&#8217;s like you can actually, the things, you can have a training with 100 recruiters, 30 of them are multitasking, 20 of them log in and walk away, and maybe 40 year the new information that you should never ask about salary in New York. Do you know what I&#8217;m saying?</p><p>And then how often do you think that, okay, guess what? They&#8217;re still doing it. They&#8217;re still saying it wrong. that you can&#8217;t say, you can&#8217;t ask the visa question like this. have to ask it like this, but you got 3000 people you&#8217;re trying to tell that to. That&#8217;s hard. You know what I&#8217;m saying? Like, so, so yeah, the governance hasn&#8217;t really changed. It&#8217;s just that. You know, they think that they don&#8217;t have a problem with it because they don&#8217;t.</p><p>actually get to see what&#8217;s going on out there. know, like they sent the memo, so if they get sued, that person&#8217;s not on the hook because the recruiter should have read the memo, you know. So I don&#8217;t know. Yeah, the governance isn&#8217;t really changing. It&#8217;s just being aware and making sure the system&#8217;s prepared for it because you&#8217;re about to turn it on and if it screws up, it&#8217;s loud.</p><p>John Sumser (33:55)</p><p>Well, so it&#8217;s interesting. I think you could make the case that governance is changing for a simple reason, and that is...</p><p>You have to assume that everything&#8217;s being recorded. You have to assume that. And another word for recording is evidence. &#8275; And so the amount of evidence that&#8217;s available is significantly different than the amount of evidence that used to be available. You knew this stuff went on in the corners, but you couldn&#8217;t see it. And now, if you don&#8217;t see it, it&#8217;s because you buried your head in the sand.</p><p>Jeremy Roberts (34:07)</p><p>yeah.</p><p>Mm-hmm.</p><p>Mm-hmm.</p><p>John Sumser (34:32)</p><p>Right? And in these areas, ignorance isn&#8217;t an excuse any longer because it&#8217;s possible to know. And so that changes the way the governance feels.</p><p>Jeremy Roberts (34:42)</p><p>Mm-hmm.</p><p>John Sumser (34:46)</p><p>Alright, let&#8217;s, let&#8217;s.</p><p>Take the last question that I said, and let&#8217;s talk about what should a buyer be asking a technology man.</p><p>Jeremy Roberts (35:00)</p><p>Mmmmm</p><p>I think the main thing to remember is that AI, it&#8217;s not, we&#8217;re not talking about tools anymore. We&#8217;re talking about orchestration of your entire workflow, which brings us back to the word governance. It&#8217;s a governance conversation. It is these are our policies. Can we do what you&#8217;re proposing within these policies? Show me how you do that. The big question, the black box,</p><p>machine learning of the 2010s. If you asked how a decision was made, they can&#8217;t tell you. You should be like, if you ask, can you tell me how you got that score? Can you tell me how that decision was made? The founder of a tech company today should be able to push a button and show it to you. And they should honestly like,</p><p>When I was vetting all of these solutions and I met with Mason, the CEO of Tenzo, when I asked him, Hey, but can you show me how that happened? He just clicked a button. He looked at me like I was dumb and pushed the button. It was like, yeah, yeah, it&#8217;s right here. You know, is there an audit log for all the events that happen? Can you show me how you made any decisions? If I were audited, we have this protocol that is.</p><p>A deal breaker for us. Right. So it&#8217;s more of like, can you orchestrate our entire process with governance in mind and keep us safe more than it is? Is it, you know, can you, it&#8217;s not a tooling conversation. I think that the tooling that I&#8217;m going back to AI interviewing, cause that&#8217;s where my head is right now, but AI interviewing, it&#8217;s not hard to sound natural. It&#8217;s hard to do it in a way that keeps you safe and optimizes.</p><p>the process while keeping you secure.</p><p>John Sumser (36:40)</p><p>So that raises maybe the last question. Do you imagine that liability for the flaws of the product becomes a real issue now? Because it used to be that if you were a recruiting function and you bought some technology, it was your problem exclusively. And now, if the models are incorrect, if the framing is incorrect,</p><p>the vendor&#8217;s got some piece of that. And so I&#8217;m wondering what you think about where liability exists going forward.</p><p>Jeremy Roberts (37:16)</p><p>That is a great question.</p><p>honestly don&#8217;t think the answer to that matters fully because if I sell it to you and you buy it, we&#8217;re actually both going to lose our jobs. So if we all get sued and I said something wrong and you bought it, the reality is both decision makers are in a problem area, right? So if we&#8217;re talking to talent acquisition practitioners,</p><p>I don&#8217;t know where the courts are going to settle. Do you know what I&#8217;m saying? Like, like it&#8217;s just like a school shooting. You know what I&#8217;m saying? Like, okay, the gun originated at Walmart. It wasn&#8217;t intended for that purpose. Who&#8217;s liable? You know what I&#8217;m saying? Like it, it, you can, you can be a bad actor with this technology. You know I&#8217;m saying? And so, I think that</p><p>regardless of if you&#8217;re sitting at a vendor or you&#8217;re sitting inside a corporate recruiting department, both of us would be affected. And I don&#8217;t know that we have, for example, in the eight-fold lawsuit, the issue at hand is can you use public information to make an employment decision? My gut instinct is</p><p>they were not judging applicants using that information. I think they were likely using that information on the sourcing side, but they also have an application that filter, a product that filters applications. And so the way this case will shake out, if they can say that it wasn&#8217;t used to make a decision, it was used to market to candidates, that&#8217;s one thing.</p><p>can show that it wasn&#8217;t on the other side, the applications. Because if you look at the quotes, the CEO says, we do not use that data to make decisions about applicants. And so they&#8217;re going to lean into that distinction. And so can someone who buys a product like that use that information on this side of the fence? Yes. Is the product designed that way?</p><p>Likely not. I don&#8217;t know it. I don&#8217;t know the product. I would assume they had good advice there. And so I don&#8217;t think that we know yet who&#8217;s going to be liable when AI goes awry, but I do know.</p><p>recruiting leaders as even the most important recruiting leader in the world is not as important as the organization that they bought it for. whether you&#8217;re the buyer or the seller, we&#8217;re both gonna have problems. So we need to make sure we do it right. From the which corporation wins the lawsuit, who knows, we would both lose.</p><p>John Sumser (39:45)</p><p>Got it, that&#8217;s great, that&#8217;s great. So we&#8217;re gonna wrap this up, it&#8217;s been fun. Any final thoughts you wanna make sure to get down here?</p><p>Jeremy Roberts (39:53)</p><p>No, I mean, I guess, I don&#8217;t know, like my big thing right now, a lot of the people I talk to in our space are really having problems. They&#8217;re really uncomfortable. You know what I&#8217;m saying? And it is, I, I was laid off in 2025 in March and I realized really quickly that there weren&#8217;t a ton of jobs and I got really scared. so I&#8217;ll share the recipe that worked for me.</p><p>I was like, you know what, there aren&#8217;t a lot of jobs and half the TA leaders that I call to network with are like, I haven&#8217;t updated LinkedIn and I&#8217;m unemployed, you know, and I was like, sucks. I don&#8217;t want to be sitting on the sidelines competing with these people. So I created a consulting firm and I flipped the script and it worked well for me. And I was able to get really busy, really fast. And what I figured out was.</p><p>Everybody has problems and typically they can find some money. So every one of those calls I would call and say, Hey, John started a consulting firm. This is what all I can do. What are you thinking about this quarter? And then I get them talking and then I would say, well, where do you have some money? And I had one client had some leftover marketing money. One client had two.</p><p>contract recruiting jobs that they hadn&#8217;t approved on the budget that they hadn&#8217;t used. And then another client just had like, they had canceled the tech product. And so they had extra money in the budget. I, everybody I talked to had problems and they needed to solve. About half the people I talked to had problems and they could find money. So don&#8217;t, don&#8217;t go looking for jobs, look for problems and money in this economy.</p><p>John Sumser (41:33)</p><p>Awesome. What great advice. So I can&#8217;t begin to thank you enough for showing up and doing this. It was a great conversation. We should do it some more. Okay. All right. Thanks, Jeremy. Bye bye.</p><p>Jeremy Roberts (41:41)</p><p>Yep, let&#8217;s do it. So it&#8217;s good to see you. All right, have a good one. Thank you. Bye.</p>]]></content:encoded></item><item><title><![CDATA[HREX Podcast 1.09 Usman (Oz) Kahn]]></title><description><![CDATA[Enterprise AI and Precision in AI]]></description><link>https://www.hrexaminer.com/p/hrex-podcast-109-usman-oz-kahn</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-podcast-109-usman-oz-kahn</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Mon, 16 Feb 2026 13:45:24 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/187554513/82dfb6916db27df8db3b71f4307d3b6b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Summary<br></strong>In this episode of the HR Examiner Podcast, John Sumser speaks with Usman (Oz) Khan, Senior Vice President and head of ADP Ventures, about the intersection of AI and enterprise software. They discuss the fears surrounding AI, its limitations in precision work, the differences between personal and enterprise AI applications, and the significant challenges of security and bias in AI models. Usman emphasizes the importance of understanding the complexities of enterprise data and the need for HR buyers to ask the right questions when evaluating vendors. The conversation concludes with insights on the prudent approach to AI implementation in organizations.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p><p><strong>Takeaways</strong></p><ul><li><p>AI poses risks that could erode human connections.</p></li><li><p>Value delivery is essential in a competitive market.</p></li><li><p>Current AI technology lacks the precision needed for enterprise tasks.</p></li><li><p>AI&#8217;s evolution is more evolutionary than revolutionary.</p></li><li><p>Security is a major concern for AI in enterprises.</p></li><li><p>Bias in AI models mirrors human biases and requires careful training.</p></li><li><p>Inconsistent data can hinder effective AI implementation.</p></li><li><p>HR buyers should ask detailed questions about vendor capabilities.</p></li><li><p>Patience is crucial in adopting new technologies.</p></li><li><p>Understanding the complexities of enterprise needs is vital.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/hrex-podcast-109-usman-oz-kahn?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/hrex-podcast-109-usman-oz-kahn?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p><strong>Chapters/Timestamps</strong></p><p>00:00 Introduction to AI Concerns</p><p>02:32 The Limitations of AI in Precision Work</p><p>05:09 AI in Personal vs. Enterprise Use</p><p>07:40 The Future of AI in Enterprise Systems</p><p>10:22 Security Challenges in AI</p><p>12:44 Bias and Training in AI Models</p><p>18:35 Navigating Inconsistent Data in Enterprises</p><p>25:34 Effective Vendor Questions for HR Buyers</p><p>32:29 Final Thoughts on AI Implementation</p><p><strong>Transcript</strong></p><p>John Sumser (00:00)</p><p>Hi, I&#8217;m John Sumter and this is the HR Examiner podcast. Today we&#8217;re going to be talking with Usman Khan who goes by Oz generally speaking. And Usman Oz is the Senior Vice President and head of ADP Ventures and the founding member of its strategic investment arm. It&#8217;s a fascinating job at the pile of the intersection of innovation and the</p><p>huge supply of customers that ADP has. So with that, Oz, how are you?</p><p>Usman Khan (00:34)</p><p>I&#8217;m really well. A little bit chilly in New York, but that comes with the territory, I suppose.</p><p>John Sumser (00:40)</p><p>Yeah, well, you can come visit me in California. It&#8217;s not chilly here today.</p><p>Usman Khan (00:44)</p><p>boy, don&#8217;t do that to me.</p><p>John Sumser (00:45)</p><p>I tell people in New York City this time of year, doesn&#8217;t have to be like that.</p><p>Usman Khan (00:50)</p><p>you</p><p>John Sumser (00:50)</p><p>So</p><p>I want to dive right in. I want to talk about the things about AI that scare you. So what are the five things that scare you most about AI?</p><p>Usman Khan (01:01)</p><p>I think the thing that scares me the most is that it takes us down a path that we should not go down. Where you see an erosion of the things that make us human at work, at home, and as friends with each other. Where it pushes us into such silos, personally and professionally, that we lose who we are.</p><p>I think that&#8217;s like the philosophical one. The very tactical one is, again, being part of a company that&#8217;s built on a 75-year legacy of strength and market dominance through value. Nothing comes without delivering value to your customers. That can get challenged in an environment where the innovation pace really comes after.</p><p>Those are two fears that we as professionals and as society have some degree of agency and try to orient in the right direction, if you will.</p><p>John Sumser (02:03)</p><p>Yeah, I worry a lot about...</p><p>The view that large language models are somehow adequate enough or can be made adequate enough to do precision work, right? That seems like something that you&#8217;d be dealing with on a regular basis, but you can&#8217;t afford. ADP could not possibly afford to have a process that goes, well, your pay this week is maybe $50. There&#8217;s actual disciplined precision required to do what you do.</p><p>And as near as I can tell, it&#8217;s not possible to get that level of precision with the lower sign.</p><p>Usman Khan (02:40)</p><p>It is not. It&#8217;s generative, which essentially means that it is designed to predict things based on patterns that it&#8217;s observed in the past. I like, you know, the human brain and the neural network, but the human brain also has a lot of other components to it, that give us judgment. And the current iteration of models is when you talk to, know, the talk about the Yann LeCun of the world.</p><p>there&#8217;s a very predominant set of thinking out there that says they have a marginal curve to which they will be efficient and be able to create judgements. They&#8217;re very good at micro-judgements. But for it to have a full set of decision-making ability about like, let me start from scratch and let me start to calculate payroll for 500 people in 35 states with a union.</p><p>in place, like forget it. AI is a long way from that level of precision. So I think the augmentation of human beings is really the frontier that we&#8217;re shooting for right now. And in some cases, people are being reckless to say that it&#8217;s a replacement for human beings. But from my perspective, the technology that we see today is not there. And you could argue about like the...</p><p>advancement of models and the pace of innovation and all that sort of stuff. But you can look at the fact that the step function change between each release becomes more more flatter. It started out revolutionary and it&#8217;s becoming more and more evolutionary. So I think a lot of things have to happen before this paradigm shifts that we&#8217;re in today. So I feel like we&#8217;re sort of in the middle of a very long S curve right now.</p><p>John Sumser (04:18)</p><p>That&#8217;s interesting from my perspective about the only enterprise thing that Generative tools are good for his marketing because in marketing it doesn&#8217;t actually matter how precise you are What matters is how persuasive you are? and so I&#8217;m worried that it leeches out into other things from there because it&#8217;s it&#8217;s awfully easy to do some things that look heroic until you</p><p>dig under the covers and find out what&#8217;s actually there.</p><p>Usman Khan (04:44)</p><p>It&#8217;s so interesting because I use a ton of AI in my personal life without promoting different models in different companies over here. Some of those problems can be as technical and precise as I like to build PCs with my kids. so troubleshooting a new rig that is having all sorts of issues because of memory loss or heat sink issues and so on and so forth.</p><p>And you&#8217;ll find that the model takes you into menus and command prompts that don&#8217;t exist that it thinks it exists. And these are like top highest tier paid models that you can get. And it&#8217;s always a reminder that, hey, it will make up stuff or it will try and keep on validating things that you believe in and the biases that you believe in unless you check it. To check it, the amount of work that you have to do is tremendous.</p><p>you have to literally grind and teach it, which is why AI in the enterprise has been at a different velocity than AI in my home. Because as a consumer, it&#8217;s very easy to pick up and apply my judgment to say this works and this doesn&#8217;t work. But when you bring that to someone&#8217;s benefits, it&#8217;s a whole different kind of works.</p><p>John Sumser (05:55)</p><p>That&#8217;s right. That&#8217;s right. That&#8217;s the precision piece. I watch the evolution of this, I&#8217;ve been following the evolution of AI for 50 years and it&#8217;s not uncommon for there to be peaks of enthusiasm followed by valleys of zero investment. And I wonder if you...</p><p>think that we have enough of a tiger by the tail so that we will avoid that kind of winter or is that an inevitable consequence of where we</p><p>Usman Khan (06:26)</p><p>I think the people that are on the very exuberant side of the market and deploying capital like you&#8217;ve never seen before might end up disappointed. Those of us that are being a little bit more disciplined, I think we&#8217;ll be fine because there are fundamental things that is evolving in the enterprise.</p><p>John Sumser (06:35)</p><p>Right.</p><p>Usman Khan (06:46)</p><p>probably is like in the business world, so to say. If you think about software as a layer cake, right, there&#8217;s generally three layers, which is one is like the input layer, this is the forms basically, right? It could be the cells in your Microsoft Excel or it could be your Word document. The middle layer is your logic layer, which is sort of like it routes information to where it should go. It can also have a calculator in it, like a payroll engine potentially. And then underneath that you have the data layer.</p><p>where you store the inputs and the outcomes of everything that you calculated. And then there&#8217;s the supplementary element to it. I won&#8217;t call it a layer, which is like the integration side of it, which connects all of this to other systems outside of it. And when you think about this construct, and the reason why I&#8217;m taking you down this long path, there are seams within that layer cake, and there are massive seams between that cake and other cakes.</p><p>And there are so many micro decisions or micro tasks that are low value added that people have to do to make that software function. And it&#8217;s everywhere. It&#8217;s an ERP, it&#8217;s an CRM, it&#8217;s an ACM systems. That is going to get solved by AI. So it is going to make all these systems far more efficient at what they were designed to do. All those things that people had to step in because it was incapable of making those micro decisions.</p><p>that scope is massive, right? You think about the fortune 500, you think about the companies underneath that you think about the mid market, you think about small businesses, and then you think even outside of the US, the total addressable market for these models and these companies is massive. I think everyone will commercially do well to make what we do today much more efficient. But those of us that were imagining, you know, AGI like scenarios where</p><p>very, very complicated processes get taken over by a new AI Salesforce comes out and basically dominates the field. We&#8217;ve yet to see that because the hard stuff around building an enterprise system is unchanged. The precise stuff you still have to do today. And that takes time, effort, manpower. Yeah, you can, you can have AI agents help you write your code, but if you rely on them completely, I was just with my,</p><p>My nephew who&#8217;s doing his masters at Berkeley in AI and he&#8217;s like, if I let the AI do the work without super-giving, I end up with a host of cards as far as my software goes. And that is true for anyone that doesn&#8217;t have experienced engineers looking at what AI is producing. So I think you have to separate these things from the hype. I think the low value added tasks are gonna go away.</p><p>But when people say, oh my God, the world is going to change and there&#8217;s going to be massive erosion and enterprise value for the ADPs of the world, I don&#8217;t believe it.</p><p>John Sumser (09:30)</p><p>So that&#8217;s interesting. You could translate what you just said into another generation of we&#8217;re finally going to be rid of spreadsheets. Because the way those little tasks are done inside of organizations is with spreadsheets so that you can manage the details of those little tiny transactions.</p><p>recently starting to believe that</p><p>That&#8217;s where humaneness is in this whole process. It&#8217;s actually not inherently inefficient. The problem there is the sophistication of the enterprise tool rather than the sophistication of the user. The user knows what they need. The user&#8217;s getting what they need. They have to. figuring out how to make an enterprise tool that is so sensitive that it can understand the nuances of</p><p>my organization, it&#8217;s kind of almost the opposite of enterprise economics, right? Because enterprise software economics runs on the idea that you build it once and sell it a thousand times, &#8275; right? And this next layer of stuff seems to me to start to look like</p><p>Usman Khan (10:36)</p><p>That&#8217;s right.</p><p>John Sumser (10:43)</p><p>every instance is a big implementation.</p><p>Usman Khan (10:46)</p><p>Yeah, I think the configuration, reconfiguration, reconfiguration, reconfiguration stuff is A, it&#8217;s exhausting. B, it&#8217;s also time away from all the things that you should be doing as an organization, right? Like it ends up being a distraction for your core enterprise because</p><p>not only are you going to have to supplement your people with external partners who are going to help you keep your stuff running, you keep your people from doing things that you necessarily care a lot more about, which is, you want HR to build culture, you want HR to think about leadership, you want HR to basically get in to the frontline problems that teams have, but they spend most of their time</p><p>buried under the transactional layer. They spend most of their time on cases that could be resolved much more easily. And I know a lot of HR people, HR and ADP is a thing of beauty in my opinion, because of what they do and what they practice and how it showed up in our products. I think the...</p><p>The observation from my side is just like, they&#8217;re just always on. We were talking about people always on in general. They&#8217;re always on because they can&#8217;t escape. And you think about Maslow&#8217;s hierarchy of needs, like they can&#8217;t escape the top department, they can&#8217;t escape the middle department, they can&#8217;t escape the bottom department. And I think this is where the unlocks from the really transactional stuff that we talked about is going to let real experts be</p><p>to the experts rather than being distracted by things that are not necessarily valuable. Does that make sense?</p><p>John Sumser (12:24)</p><p>Absolutely,</p><p>absolutely. I&#8217;m going to switch topics on you here. This last weekend was a big aggressive look at security problems in AI. We started to see some theoretically unimaginable things, but Cloudbot or what is it, Moltbot now? &#8275;</p><p>Usman Khan (12:44)</p><p>Yeah.</p><p>John Sumser (12:44)</p><p>Whatever that thing is.</p><p>poses the question of how do you handle security? I&#8217;ve been watching the large language model security question for a couple of years now, and the number of ways that you can break in and break things is expanding as fast as the bottles are. It&#8217;s incredible to see how the...</p><p>how vulnerable the tools are and how little attention is really being paid to security of the model itself, of the stuff that sits above the hardware and enterprise layer. What are you thinking about?</p><p>security inside of the AI world.</p><p>Usman Khan (13:21)</p><p>I think it&#8217;s probably the largest blocker in enterprise for AI adoption because it&#8217;s imperfect. There was this research recently that everyone quoted that said 80 % of AI pilots are failing. I think, again, this is personal bias, so take it with a grain of salt. I don&#8217;t think they&#8217;re failing.</p><p>I think they&#8217;re experimenting. They&#8217;re trying to see is the risk acceptable? Is the boundary acceptable? Can this thing be secure to the point that we feel that there is a liability for us and our customers? And so earlier stage organizations that don&#8217;t have a lot to lose, that are not sitting on top of billion dollar businesses are able to make that risk equation a little bit more, a little differently, right?</p><p>But for incumbents, it&#8217;s harder because you have massive liability. And so until you can be completely sure that the AI can&#8217;t be compromised by, again, AI driven attacks or your security framework isn&#8217;t strong enough, or you can&#8217;t essentially fool the model into doing something inadvertent, you hold back.</p><p>which means it takes you three times as long as an enterprise provider to do what a startup might do in the span of the same amount of time because they&#8217;re happy and willing to take risks. And their customer types, by the way, are also startups like themselves who don&#8217;t care about the same level of risks. If you think about a traditional mid-western client, they&#8217;re very different from a California company in the Gulf Valley in terms of what they&#8217;re willing to do and their whether.</p><p>to take. I think there&#8217;s a ton of value that&#8217;s going to be in the security sector. Everyone&#8217;s talking about fighting fire with fire. You cannot monitor the amount of volume of attacks that you could get because of, know, malware and bad actors.</p><p>the only way to counter it is with AI of your own. And that&#8217;s what all these companies that you&#8217;re seeing come into the cybersecurity realm are doing. So it&#8217;s gonna continue to be an elevated level of game set match in that domain, I think.</p><p>John Sumser (15:32)</p><p>Yeah, I find it interesting. Part of what I see happening in security is the fixing of the gate after the horses left the pasture. You can&#8217;t really solve the problem until you&#8217;ve been a victim of the problem, sort of. And I wonder...</p><p>How do you think about, because you work in a world where there is a massive product development department, and if you want to this stuff nipped in the bud, you have to embed it in the initial layer of the tool rather than as a bolt on on the outside of it. As soon as you start bolting security on, you create vulnerabilities rather than fixing them. So how do you communicate this rapidly changing security environment?</p><p>down to the ranks of people who are doing code.</p><p>Usman Khan (16:21)</p><p>So I think you do it by a having an ownership mentality where any failures in security are not owned by the security organization. They&#8217;re owned by your team.</p><p>They&#8217;re owned by you as a product group. And I think that starts to create a culture of accountability and a culture of really thinking through what you&#8217;re releasing. I can talk to the number of precautions that we put into place, the AI gateways that we&#8217;ve built that have kill switches in them, the information guardrails to keep bad actors from.</p><p>from exploiting systems and so on and so forth. And that takes a ton of time, by the way, to engineer and architect. And so we have been doing that. And it is a combination of culture, process.</p><p>and also diligence, right? I think your security apparatus as any organization also has to be super diligent, right? And you have to have people or whether they&#8217;re in your organization or outside your organization pointing out your vulnerabilities. And so when it comes to AI deployed, it really becomes a factor of like, what is it exposed to?</p><p>Because if it&#8217;s exposed in the public domain as a consumer facing application, then your vulnerability is tremendous. Because anyone can interface with it and go to town. If it&#8217;s behind a security firewall of a login for an employee, then already you have a whole bunch of security frameworks that you can put on top of it before.</p><p>someone goes into the playground and actually starts to do something with the AI. And then you have a second set of guardrails in terms of what the user permissions are and what they can get wrong. So again, it&#8217;s like a medieval analogy where you see these castles in Europe where you really have so many different walls that basically are layers of defense that are designed to sort of be safeguarded. You got to run it the same way with software and technology.</p><p>John Sumser (18:26)</p><p>What an interesting image. think I&#8217;ll hang on to that one. The layers of defense in a castle as approach to AI security. That&#8217;s awesome.</p><p>You have these learning models and they learn by absorbing the latest content that flows through. Learn is probably an extreme characterization, you know what mean. How do you monitor and control drift from the task or drift into biases that you don&#8217;t want to support when you...</p><p>And maybe the large question underneath this is, do you think it&#8217;s possible to build bias-free digital technology now?</p><p>Usman Khan (19:09)</p><p>I think it&#8217;s the same principle, again, I&#8217;m a fan of analogies, my team gives me such a hard time over it. But it&#8217;s the same principle as training up your kids, so to say. We as human beings are either sometimes, I don&#8217;t think we&#8217;re born with biases, but we start to build them over time based on our environment, our education, our learnings of the world, and so on and so forth.</p><p>John Sumser (19:16)</p><p>Ha</p><p>Usman Khan (19:36)</p><p>And so the interesting mimicry of the human being in these large-dangling models is this whole notion of neural networks. And so they will learn and build biases, or they will learn the pattern recognition to problem solve. Because I&#8217;ve seen X, and Z, I can replicate the pattern and tell you ABC. And the same way a human being might get it wrong, the model will also get it wrong.</p><p>But I think with a human being or a kid, you will hopefully have trained them to recognize their biases, to not behave in a way that they violate the task that they&#8217;ve been asked to do. And it works exactly the same way for your models. And that&#8217;s why even if you&#8217;re picking up a model off the shelf, if you train an Asian,</p><p>Training an agent is essentially grinding. It&#8217;s grinding with the learning. You have to basically run it through its paces. You have to it where its limitations are. You have to get other agents to test it and basically see if it behaves badly. You have to have trained those other agents to test that agent because you want to make sure that you completely control the variable in prep. So the same way you can teach your kid to not say completely inappropriate things at a family dinner.</p><p>The same way you have to train your AI to not go off on a tangent and start to imagine stuff. You ask about a company HR policy, your guardrails, your mechanisms, your training should basically point to only reference my company&#8217;s policy, what we put into place. Do not go out to any other existing knowledge about company policies on X, Y, and Z. Stick to what you&#8217;ve been told.</p><p>based on this, apply your neural framework and give an answer. And that takes time, unfortunately. Again, it goes into the cycle time of developing. So whereas as a consumer, you&#8217;re okay with an open-ended answer. And if you know something about the subject, you might catch the error and you might correct it, or you might actually take a wrong answer and believe that to be the truth, by the way, as well. And that is also the inherent risk of, you know.</p><p>leading you astray. But the opportunity cost of you doing that as a consumer is nothing compared to an organization doing it where especially it&#8217;s a compliance-based violation that might result.</p><p>John Sumser (21:50)</p><p>So you opened up a rabbit hole that I&#8217;m gonna run down for a second.</p><p>If you try to give a large language model the HR policy data that&#8217;s available in a large organization, you will have a great big giant pile of incomplete and contradictory data. Because you can&#8217;t...</p><p>enforce the same specific dress code in San Francisco that you can in Minneapolis. That sort of thing. There&#8217;s going to be variability inside of the data if it&#8217;s complete and if it&#8217;s up to date. I have yet to see or hear of a good way of sorting that out. It has stopped a lot of companies in their tracks over the last 15 years because that</p><p>incomplete and inconsistent data makes for a stew that&#8217;s really hard to predict what the right answer is in a given context.</p><p>Usman Khan (22:51)</p><p>This is a good one. and actually this speaks to actually one of our portfolio companies as well. so, so we invested in, a company called Emma that&#8217;s EMA. they&#8217;re, essentially a platform to help you deploy AI within the enterprise space. And so they have like, this, this thing called Emma fusion, which is, which is basically like a model layer that picks the most appropriate model for the use cases that you want.</p><p>and they build AI employees or agents around specific workflows. And the reason why we actually got really interested in Emma was because after meeting them, found out that Hitachi, which is a very large global conglomerate, actually, I was really shocked to hear what they did because I always associated with electronics in my home, but they do a heck of a lot more.</p><p>They, they use the Emma to basically become the, the HR policy, agent for their global workforce. And the problem that you&#8217;ve just described, right. You&#8217;ve got. You in your database, you&#8217;ve got policies that are probably outdated. You have policies that are contradictory. You have policies that are geography bound. you have policies that are date bound. you have policies that are role bound.</p><p>And when you pull it all together and you try and make sense of it, you can end up with contradictory answers, right? Simply inquiring on your PTO balance might pay time off. I&#8217;m sure this is an HR audience, so they&#8217;ll all know this. inquiring on your PTO balance, is it going to use the right methodology for where you&#8217;re based and who you are based on your context? You don&#8217;t know that. And so the...</p><p>This is where the proprietary work goes in past the model layer, right? Essentially figuring out how to use small language models for places where you don&#8217;t want it to go off the rails, you don&#8217;t want it to go sideways, for using large language models where they&#8217;re extremely directed down this path of like, this is your framework, this is your hierarchy of how you should treat information.</p><p>And then essentially using it, the process and put out this whole notion of what are the most accepted ways to answer your top 100 queries. Because if you think about it, like your employee base is not going to have more than a hundred different types of questions, right? Then you get into the really long tail of the completely odd and insane question that you probably want a human being to answer. Right. And so.</p><p>You build that, you massage it, you train it, and guess what? You put it together with this whole notion of, again, the castle defense from a security standpoint, with this really, really methodology-based approach to how information should be accessed and presented, and it works. And you can put it in front of tens of thousands of employees. And so, again, we invested in EMMA. We brought the same technology to our platforms.</p><p>That is actually one of the use cases that they&#8217;re working with us on to complement some of the work that you do on ADP assist. But I hope it sort of like highlights how much work you have to do to get to something that sounds very simple to a lay person. might just be like, yeah, let me just load all my documents in the chat. You should be able to tell me, right? Cause we do that in our personal lives, but in an organization, that&#8217;s not the same reality. Something that might seem extremely easy and simple is actually very complex.</p><p>John Sumser (26:02)</p><p>So you are saying something that I don&#8217;t think I&#8217;ve heard anybody say before, and that is that an enterprise implementation of say HR is best understood as an org chart sensitive thing with role and level and task definitions by at least</p><p>division and probably down to a department level kind of thing. And that&#8217;s a very different way of thinking about how this implements. Most of the stuff I see assumes that there&#8217;s a universal quality to be found somewhere. I find that remarkably, it&#8217;s the best idea I&#8217;ve heard this week.</p><p>Usman Khan (26:50)</p><p>Well, I&#8217;m glad to hear it. We&#8217;re just at the start of the week, so you might hear about something better.</p><p>John Sumser (26:54)</p><p>Hahaha</p><p>That&#8217;s good, that&#8217;s good. So to wrap this all up, if you&#8217;re an HR buyer, what kind of questions should you be asking vendors?</p><p>Usman Khan (27:06)</p><p>think what you need to be asking vendors is, I&#8217;m gonna give you a slightly longer answer, because my media trainer would go absolutely crazy. The longer thing that I would give you is really go deep into the details of what you&#8217;re able to do. Because there&#8217;s a high tendency to...</p><p>for providers to focus on like the top line marketing of, you we have a thing for this and we have a thing for that and we have a thing for that. Get deeper into how effective that is. The HR buyer knows what their top.</p><p>There&#8217;s no such thing as top five problems. Top five problems are just like things that are on fire. You generally have 20 to 30 problems and sometimes more depending on the size and scale of your organization. As an HR buyer, you should know your top 20 to 30 problems that exist across that same taxonomy of needs that I just described. Departments, people, role types, geographies, and so on and so forth.</p><p>And think about where you could get the highest amount of agency with AI, whether it&#8217;s reducing the service load on your HR help desk, whether it&#8217;s case resolution for the same thing, whether it&#8217;s employee experience, whether it&#8217;s talent elevation, whether it&#8217;s faster time to roll from a recruiting standpoint. I think you have to really get into the deep details of what does it do, how well has it been trained, and essentially,</p><p>How are your teams going to react to it? Which again, goes back to our general philosophy in life at ADT is like, have to really take your time with Dustin and stuff on it because in some places you&#8217;re going to find fascinating technology that people have built. In other places you&#8217;re going to find that you have maybe 25 % of the promise and the rest of it is just like, do you really want to pay for 25 % of what you think you should be getting and burn your team&#8217;s time on it?</p><p>or just focus on the things that are the most important in that list of 25 to 30 things. Does that help?</p><p>John Sumser (29:08)</p><p>That helps, that helps. It raises a question of...</p><p>how do you get smart enough to figure that out? I can tell you what my 20 problems are, but if I go into a demo of some kind...</p><p>You have to be particularly aggressive in a demo environment to get the kinds of answers that you&#8217;re talking about. It requires an ability to ask the right questions, understand the answers, and move forward and keep the process on track rather than on the talk track. And I don&#8217;t know that HR buyers are trained to do that.</p><p>It&#8217;s not very nice.</p><p>Usman Khan (29:45)</p><p>So.</p><p>I completely understand it, right? Because expertise is built in directions that take you deep in terms of the effectiveness of what you need to do. This is another thing too, right? I think the thing that our HR audience in 2020 should take away from hopefully this conversation is that everyone needs to get a lot more smarter about technology because it is going to blend with what you do, right?</p><p>And having a point of view is important. And so if you don&#8217;t have a very, very deep point of view right now, I think there&#8217;s couple of ways to get there. And the problem is different for every type of buyer. So we talked about the variability in terms of the client base that we work with at ADP. We might be dealing with 50 employee companies in our small business division, or might be dealing with mid-market companies with 50 to 1,000 employees, or the up market, which is more than 1,000.</p><p>If you&#8217;re an up market HR buyer or even an upper mid market one, you generally have a technology team in your organization. I think this is the time to forge closer alliances with the technology team because they&#8217;re thinking through the same issues on the operational side as far as deploying technology for your business needs goes.</p><p>They might be working on your logistics systems. They might be working on your finance systems. Get them thinking about your HR systems. Get them thinking about effectiveness of agents because all of these organizations have teams. have CIOs, have CTOs. How does your CIO or CTO think about success with agents? What is a good framework and measure input to outcome for agents? And you don&#8217;t have to go all the way. I think you just have to start to pull them into the conversation a little bit.</p><p>so that they&#8217;re also part of the discussion. I think the table stakes for most HR buyers are, hey, is this thing going to keep me compliant? Is this going to accomplish the large operating model question that I have and the things that I&#8217;m trying to achieve on a functional level? The AI stuff right now, feel that it&#8217;s still, some of it&#8217;s seeping into the core of why you should buy the system, but it&#8217;s more of a benefit.</p><p>I think the first thing you look for is just lack of pain with running a particular system. And AI does have a role to play there. So as you get from that spectrum of like need to have, must have, that these things would add a lot of productivity, partner up. And then if you&#8217;re in the lower mid market, I think the market changes a little bit because you don&#8217;t have a lot of customized solutions. You have things that work outside of the box. The risk of trying them out, the sandboxes are easy.</p><p>John Sumser (32:05)</p><p>you</p><p>Usman Khan (32:15)</p><p>I think just go experiment, spend time with your peer cluster. I love that HR people talk and they always have points of view on different software. Share that out. So again, long response to what was the short question.</p><p>John Sumser (32:29)</p><p>Well, I was hoping for a long response. So this has been a great conversation and we&#8217;re going to close it up here in a second. Have you got anything that we missed that you want to be sure we cover?</p><p>Usman Khan (32:41)</p><p>I we covered it a little bit, but I would say don&#8217;t have FOMO. Don&#8217;t have fear of missing out. Because sometimes, and a lot of times, there&#8217;s an advantage to observing what happens in the market, whether it&#8217;s something that you want to do or a trend that you were looking at, and then taking action rather than being the first person out the door with it.</p><p>If you are in an organization that loves to take risk and loves to be at the front of it, how about it? But if you&#8217;re in the middle of the road be prudent because I don&#8217;t think there&#8217;s anything that we&#8217;re massively losing by As as buyers taking the buyer perspective But by waiting to see how things settle how things play out</p><p>So you shouldn&#8217;t be in an urgency to deploy AI in a manner that might not be well thought out. Think it through. Really, really, really design it. And so that&#8217;s what I would say. Whether we, and we apply it here, whether it&#8217;s investing in the company, whether it&#8217;s building a new product, whether it&#8217;s like working on roadmap with other ADP teams, I think it&#8217;s a principle that&#8217;s as quite well.</p><p>John Sumser (33:42)</p><p>That&#8217;s awesome. That&#8217;s awesome. I don&#8217;t think many tech executives are willing to say something like that. patience is the way to the solution. That is distinguishing. That is distinguishing. So thanks for taking the time to do this. I really appreciate it. It was a good conversation.</p><p>Usman Khan (34:03)</p><p>Thank you, John. I enjoyed sharing my thoughts and hearing yours. I think you asked some awesome questions.</p><p>John Sumser (34:09)</p><p>Great, great. So you&#8217;ve been listening to HR Examiner Podcast, and we&#8217;ve been talking with Oz Khan, who is ADP&#8217;s head of ADP Ventures and part of a strategic investment firm. Thanks, and we&#8217;ll see you next time.</p><p></p><p><strong>Keywords<br></strong>AI, enterprise software, security, bias, HR technology, precision work, investment, innovation, data management, vendor questions</p>]]></content:encoded></item><item><title><![CDATA[HREx 1.08 Jonathan Duarte]]></title><description><![CDATA[Creating Chatbots Before They Were Cool]]></description><link>https://www.hrexaminer.com/p/hrex-108-jonathan-duarte</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-108-jonathan-duarte</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Wed, 11 Feb 2026 13:44:59 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/186907355/570c4d46ab2463364719a1e2be114576.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Summary</p><p>In this episode of the HR Examiner podcast, John Sumser speaks with Jonathan Duarte, a pioneer in chatbot technology for recruitment. They discuss the evolution of chatbots, the challenges of AI in HR, the importance of data quality, and the limitations of SaaS solutions. Duarte emphasizes the need for human insight in automation and the risks associated with AI hallucinations. The conversation also touches on the historical context of Taylorism in recruitment and the future of conversational interfaces.</p><p><strong>Takeaways</strong></p><ul><li><p>Chatbots have evolved but still rely on scripted responses.</p></li><li><p>AI in HR faces challenges due to the need for deterministic answers.</p></li><li><p>Data quality is crucial for effective recruitment processes.</p></li><li><p>SaaS solutions often fail to meet the unique needs of businesses.</p></li><li><p>Understanding Taylorism is essential for process automation in recruitment.</p></li><li><p>Conversational interfaces will improve but won&#8217;t cover all use cases.</p></li><li><p>Security risks in AI must be addressed, especially in sensitive areas like healthcare.</p></li><li><p>Hallucinations in AI responses can lead to misinformation and legal issues.</p></li><li><p>Human insight is irreplaceable in understanding complex processes.</p></li><li><p>Automation should not replace the knowledge of experienced employees.</p></li></ul><p><strong>Time Stamps/Chapters</strong></p><p>00:00 The Evolution of Chatbots in Recruitment</p><p>02:36 Challenges of AI in HR and Recruiting</p><p>05:37 Data Quality and Its Impact on Recruitment</p><p>08:28 The Limitations of SaaS in Recruitment Processes</p><p>11:15 Understanding Taylorism in Modern Recruitment</p><p>14:11 The Future of Conversational Interfaces</p><p>17:05 Security Risks of Large Language Models</p><p>19:53 The Dangers of Hallucinations in AI Responses</p><p>22:37 Navigating Data Integrity in AI Systems</p><p>25:25 The Role of Human Insight in Automation</p><p>28:16 The Future of Work and Process Automation</p><p><strong>Transcript</strong></p><p>John Sumser (00:00)</p><p>Hi, I&#8217;m John Sumter and this is the HR Examiner podcast. Today we&#8217;re going to be talking with Jonathan Vorkate. Jonathan is...</p><p>Among a bunch of other things, he&#8217;s one of the real pioneers in the chatbot craze that we are in the pros of right now. As long ago as 2016, he was building chatbots and has grown that view of the world as the technology has evolved. And so it&#8217;s really going to be it.</p><p>Great conversation, because we&#8217;re talking to somebody who&#8217;s been there on the ground floor for a decade now. How are you, Jonathan? Introduce yourself.</p><p>Jonathan Duarte (00:39)</p><p>Yeah. &#8275;</p><p>yeah, fantastic. Thanks John. And &#8275; so I come from I think now 30 years in recruitment tech built one of the first job boards in 96. &#8275; Then built job distribution so that data play on moving jobs between job boards and ATS is then built a small business background screening company called Good Hire was on the.</p><p>founding team for that organization. And then while we were doing that, there was this little company at the time called Uber who needed to hire 600,000 drivers in 180 days, which they did all over text messaging in 2016. No one talks about it. Good thing is I was one of the guys building parts of the API in the first part.</p><p>for the background check on drivers. So I saw this early on, this conversation happening. No one was calling it a chat bot because it was all scripted, right? And my co-founder at the time said, let&#8217;s just try this for job search and see if people would do it. 10 years ago, last month, we built the first chat bot in the United States for job search. went viral.</p><p>after two Facebook posts to 103 countries in 30 days. So I quit my job and decided conversation is the way we&#8217;re actually gonna communicate and do UI in the future.</p><p>John Sumser (02:15)</p><p>That&#8217;s awesome. So talk a little bit about how it&#8217;s evolved since then. Because on one level, you&#8217;d be excused for thinking that AI is here and AI is what chatbots are all about. But when you look under the cover, most of what you see is the same scripted stuff that you were doing 10 years ago. &#8275;</p><p>Jonathan Duarte (02:36)</p><p>Yeah, yep.</p><p>And I think it&#8217;s always going to be that way &#8275; because there are things that there&#8217;s rules in a database and where this data comes from that &#8275; sometimes it&#8217;s just better to just get the deterministic response rather than trying to create a generative response on something. And I think where we&#8217;ve come from in last 10 years, so</p><p>I built the, or was product manager of building the first conversational chat bot for Wells Fargo &#8275; on a project. Then I left there and built the first contact center, healthcare chat bot for Kaiser Permanente. And again, both of these huge enterprise, huge security issues. And they, on the enterprise, they haven&#8217;t moved that much. And the reason why is because the getting the data</p><p>back to the individual through the multiple solutions is really hard to do. And there&#8217;s so many data exceptions that to really have the kind of conversation we want, &#8275; human-like as much as possible, it&#8217;s going to take a while unless you own the entire pipeline on the back.</p><p>John Sumser (03:53)</p><p>I wonder if it&#8217;s even possible to get it right. Because particularly in HR and recruiting, probabilistic answers really aren&#8217;t good enough. So you say, how much vacation time do I have? If the answer is, I think it&#8217;s about a month.</p><p>Jonathan Duarte (04:04)</p><p>Yep. Yep.</p><p>John Sumser (04:10)</p><p>It&#8217;s not going to work, right? There are in business, most of the answers that matter are binary. They&#8217;re not probabilistic. There&#8217;s, when am I going to get to be the CEO of the company? And it&#8217;s like, well, if you do these five things, you might get there in 30 years. That&#8217;s where probabilistic conversation is useful. But did you get my resume and what do you think of it?</p><p>Jonathan Duarte (04:10)</p><p>Yeah.</p><p>Exactly.</p><p>John Sumser (04:40)</p><p>Well, I&#8217;m not sure we really got it, you know, &#8275; that sort of.</p><p>Jonathan Duarte (04:45)</p><p>Great example,</p><p>right?</p><p>John Sumser (04:48)</p><p>Yeah, yeah, yeah. That can&#8217;t work. And there is nothing that I&#8217;ve seen that actually is a guaranteed fix for getting a scripted tool to behave like it sounds like a human, right? There&#8217;s just too much error rate inside of it. And you don&#8217;t really hear people talking about error rates, but if the...</p><p>If the error rate generally runs around 20%, right, 80%, that means one out of every five letters is wrong, one out of every five words is wrong, one out of every five answers is wrong. And that&#8217;s not good enough. If you had an employee who behaved like that, you&#8217;d fire him.</p><p>Jonathan Duarte (05:37)</p><p>Yeah, I think another example of that is from a business perspective. So if you look at it, why would a company invest in a employee self-service chatbot to do something that the UI can do that&#8217;s deterministic? Like how many hours do I have for PTO, right? Yes, the cost to get that answer is training the user.</p><p>building the system, updating it, but those are known costs. We&#8217;ve had those for 50, 60 years, right? They&#8217;ve changed over time, but those are known costs. And that&#8217;s only one thing that we train the user on. We train the user on, hey, like I became a part-time ski instructor at Tara at Palisades, Tahoe. I went through literally 20 hours of training, the ROI to get someone up to speed.</p><p>&#8275; you know, I look at it from a business perspective, the ROI to get an employee up to speed doing ski school is so expensive, but we&#8217;re also the revenue, one of the major revenue streams of the company. So if we&#8217;re talking about a manager management executives, those, that training, if we know we can get it deterministic and there are a thousand other people around the company that can just say, Hey, John, how do you get to my payroll?</p><p>How do you do? That&#8217;s easy. We already have a system in place that works. So there&#8217;s really no reason to upend that system and get a 20 % failure rate. That&#8217;s why AI is not going to be as spectacular for deterministic issues. It&#8217;s great for helping write emails. But what&#8217;s next?</p><p>John Sumser (07:29)</p><p>Yeah,</p><p>you know, you know, one of the one of the interesting experiments I spent a bunch of time around the time that you were building out chat bots with a company called Socrates and Socrates was trying to build.</p><p>a chat interface to employee handbooks. And what they found was, first, the data isn&#8217;t any good because nobody ever updates their employee handbooks. And if the company has more than one location, the policies are different between locations. So you can&#8217;t actually go into big company X&#8217;s chat bot for HR and say,</p><p>Can I wear shorts to the office? Because yes, you can in Florida, Texas, and California, but no, you can&#8217;t in Minneapolis, right?</p><p>Jonathan Duarte (08:25)</p><p>Yeah, and that&#8217;s not even</p><p>a global enterprise. That&#8217;s just a company who has people in Colorado, Florida, and California, and New York. That&#8217;s not big. That&#8217;s not a huge enterprise.</p><p>John Sumser (08:28)</p><p>Right.</p><p>Yeah, but the actual way that companies are run is way different than the SAS model would have you think. One of my soap boxes these days is the reason spreadsheets remain popular is because SAS software sucks. &#8275;</p><p>Right? SaaS software is the idea that everybody fits into the same workflow. And if you don&#8217;t fit into the same workflow, then you&#8217;re going to have to fix it yourself because we&#8217;re the software company and this is the workflow. And particularly in important, critical important things like recruiting, &#8275; companies do it really differently. They do it...</p><p>Jonathan Duarte (09:02)</p><p>Yeah. Yep.</p><p>I&#8217;ve</p><p>never talked to two companies that hire the same part-time worker the same way or any worker.</p><p>John Sumser (09:32)</p><p>Yeah, so underlying all of the dissatisfaction that&#8217;s in the market with recruiting software is this problem that there isn&#8217;t a universal tool that will work for everybody everywhere.</p><p>Jonathan Duarte (09:46)</p><p>Yeah, yeah. I think that&#8217;s the hardest part about this. I also, I had listened to your talk with Matt Charney and really interesting they brought up Taylorism because at UCLA, I studied Taylorism from one of preeminent professors in the space. And it was phenomenal because I looked back and it just turned out that</p><p>John Sumser (10:00)</p><p>Uh-huh.</p><p>Jonathan Duarte (10:14)</p><p>I actually saw Mary on my professor who was &#8275; married to John Lithgow. &#8275; They were on the Oscars last year. And I&#8217;m like, my God, I can&#8217;t believe. I just thought back in my life, since my first career role, all I&#8217;ve been doing is implementing Taylorism my entire life. Process automation.</p><p>John Sumser (10:39)</p><p>So one of the most interesting days I spent, you know, Jerry Crispin. Jerry Crispin is a graduate of the Stevens Institute of Technology, which sits on the Hudson River looking at New York City. I think the town is Jersey City. And there is a Frederick Taylor Library that we went into and hung around a bit because</p><p>Jonathan Duarte (10:55)</p><p>Mm-hmm.</p><p>John Sumser (11:06)</p><p>I have a fascination with Taylor, Jerry has a fascination with Taylor, and it&#8217;s worth figuring out how to get there to see the library.</p><p>Jonathan Duarte (11:10)</p><p>Wow.</p><p>Yeah, yep. It was weird. My dad went to Harvard Business School and one of the books he had on a shelf was the scientific management book for Taylorism. And I had no idea. I know I had thumbed through it before, but I had no idea that that&#8217;s exactly what my career was going to be.</p><p>John Sumser (11:29)</p><p>Uh-huh.</p><p>Jonathan Duarte (11:41)</p><p>different in recruiting, it&#8217;s different in HR, but it really comes down to, have a technical mind, I&#8217;m not a programmer, but &#8275; understanding the process flow of data, &#8275; that I think is one of the hardest skills for most people to get to actually understand business and understand the data flow.</p><p>John Sumser (11:59)</p><p>Yeah, well, I agree. And there&#8217;s a whole question. One of my favorite rabbit holes these days is what is data quality and how do you tell? And when you look at the earliest stuff that I saw about trying to figure out recruiting,</p><p>was all about making sure that the data was clean and it was never clean. &#8275;</p><p>Jonathan Duarte (12:28)</p><p>Whoever</p><p>they&#8217;re thinking they&#8217;re going to get clean recruiting data and they&#8217;re still writing checks for it. I&#8217;m like, good luck.</p><p>John Sumser (12:38)</p><p>Yeah, well, but so what do you do if data is the heart of the answer? &#8275; What do do if you can&#8217;t clean the data? How do you deal with that?</p><p>Jonathan Duarte (12:48)</p><p>I think we have to look at like holistically going back to recruiting that you and I, like I graduated from UCLA in 93 with a resume on a piece of Strathmore paper, right? So if you, and this is again, just goes back to Taylorism, forget the technology or anything else, just go back to the process. It&#8217;s finding and having candidates that are qualified &#8275;</p><p>available and interested at that time. And then we can throw computers in there to increase the engagement. We could throw computers in there to match. We can do all these things. But the way it&#8217;s set up right now is there&#8217;s so many vendors in that process that we just don&#8217;t get the data. I built this original source of Hire Protocol in 2001. And</p><p>many of the job boards and ETS companies are still using it, but they, I&#8217;ve seen companies where they still have a dropdown. How&#8217;d you hear about us? Like, Like, I, and it gets, and then I, you know, I don&#8217;t ask myself that question. Why anymore? Because I know that the TA teams, &#8275; you know, what is it? I think a director of TA last 18 months, typically. So there&#8217;s no continuity.</p><p>John Sumser (14:11)</p><p>Right.</p><p>Jonathan Duarte (14:14)</p><p>HR hasn&#8217;t been as strategic as I think the businesses need them to be. And talent, there&#8217;s lots of people who could say HR isn&#8217;t a technical area. I&#8217;d say you don&#8217;t know what you&#8217;re talking about. It&#8217;s probably the biggest because it is the foundation of payroll. And if payroll is not done right, you don&#8217;t have a company.</p><p>John Sumser (14:34)</p><p>&#8275; So that leads me to a question about how long do you think it&#8217;s going to be before it&#8217;s possible to have a real conversational interface?</p><p>Jonathan Duarte (14:46)</p><p>Um, I don&#8217;t think we&#8217;re ever going to see a hundred percent of use cases. I think in.</p><p>We&#8217;ll get there, but we have to be really clear about use cases. &#8275; for instance, when we were building the Kaiser solution, we were really clear about we can only use conversational on non-health care related issues. Number one, like we can&#8217;t try to have the system start billing, start doing &#8275; interviews &#8275; or scheduling.</p><p>Those just needed to be deterministic and just done. Don&#8217;t break what&#8217;s working. It&#8217;s too massive of a scale. But we could answer &#8275; level one support questions. &#8275; How do I log in? How do I reset my password? And my view of all of this is that it&#8217;s, we&#8217;re gonna see this over the next five to 10 years is that level one support of how do I find my PTO? That little chat window.</p><p>can give you instructions within Workday or within SAP. So the really simple solutions that we spend, we do spend a lot of time training people on, we&#8217;ll be able to normalize that down to some simple instructions on that platform. But we&#8217;re not going to see agents of agents that can communicate across big data sets very quickly.</p><p>Because getting the data, number one, is a core issue. But then understanding where the data is and the exceptions to the data almost is so expensive that the solutions just aren&#8217;t going to match up right away.</p><p>John Sumser (16:35)</p><p>Yeah, I guess I... Go ahead.</p><p>Jonathan Duarte (16:35)</p><p>So I think we&#8217;ll get there, but simple things.</p><p>John Sumser (16:38)</p><p>Yeah, okay. So, so one of the things I&#8217;ve been puzzling about, how do you get a large language model to remember in my applications, not in, not in the use of chat GPT, but I want to go over here and do some sort of chat project. And the first thing that you went into is that it doesn&#8217;t have a memory. So you can&#8217;t.</p><p>Um, actually get it to do the same thing twice in a row. Cause it doesn&#8217;t remember how to do it. How do you fix that?</p><p>Jonathan Duarte (17:11)</p><p>Yeah, yep.</p><p>If I had that answer, swear I&#8217;d be getting one of those $10 million checks from Metta, right? Or something like that. I think the &#8275; issue, and we have found this, some pretty smart people in the recruiting space have found this too, that you could take the exact same set of resumes, even on the same model, but do it from a different country or do it from a different &#8275; state.</p><p>which is gonna mean you&#8217;re gonna go into say, chat GPT from two different locations and two different servers and the answers come out radically different. I think this point is that because it&#8217;s probabilistic and we aren&#8217;t setting criteria upfront, we don&#8217;t own what&#8217;s gonna come out on the backend, which is why.</p><p>Anyone says, hey, we&#8217;ve got an end to end chat bot. We&#8217;re using open AI or some other LLM to match candidates. I&#8217;m like bankruptcy. Like that is never going to fly with the VP and CHRO and the CIO.</p><p>John Sumser (18:20)</p><p>Yeah, have you, this is a great thread. Have you spent much time thinking about the security risks in using large language models?</p><p>Jonathan Duarte (18:29)</p><p>&#8275; there is, there is a couple. haven&#8217;t spent that much time, &#8275; on the security piece, but the memory piece that you were mentioned before I have, like our company were used in open AI and chat GPT. we have it connected into our own enterprise solutions as well, but, &#8275; we&#8217;re really small scale. We&#8217;re not a Deloitte or something like that. So be able to prompt up and use.</p><p>an open AI or any of these other models with your, your kind of, would say your, your memory of, what solutions are we supposed to be doing for this product market fit exercise for marketing in P and G? I don&#8217;t know how they&#8217;re going to solve that right now. That that&#8217;s going to be the million dollar question. The security piece. I think most of these guys have it down except for.</p><p>&#8275; from a login perspective, but I think the security really comes back to it&#8217;s really the risk, not the security of the data, but it&#8217;s the risk of, because they can do private &#8275; LLM access and stuff like that, but the &#8275; real risk is the probabilistic response, especially in healthcare. You can&#8217;t give a somewhat correct answer.</p><p>John Sumser (19:53)</p><p>Yep, yep, that keeps coming back as the bug move. But I also, you know, I did some research over the last year and I&#8217;m up to about 45 different ways you can hack a large language model. And it has nothing to do, most security people focus on the machine and the intersection with the machine. And when you hack an LLM, one of</p><p>Jonathan Duarte (20:14)</p><p>Mm-hmm.</p><p>John Sumser (20:20)</p><p>paths you can take is towards hacking the machine. But another one of the paths that you can take is towards causing it to say something stupid that the company has to pay for. And so &#8275; there&#8217;s plenty of legal precedent now that says if your chat bot says you&#8217;re going to stand on your head and face north, you&#8217;ve got to do it.</p><p>Jonathan Duarte (20:33)</p><p>Yeah? Yep.</p><p>Yeah, think, and you know what is a great example of that, that people are using every single day marketing teams are, how do you, know, previously we would just have search engine optimization as a way to market your company. So when someone did a search on Google, you would show up. And then we had SEO guys that like myself that would go tell Google search engine.</p><p>all about our company and we do it better than the competitor. We&#8217;re doing the exact same thing with large language models. We are going in and telling it who the best is and who the worst is. So our competitors aren&#8217;t showing up. So we&#8217;re hacking the answer. We&#8217;re not hacking the actual hard, hardwired solution in this way. And it&#8217;s very public that it&#8217;s happening.</p><p>John Sumser (21:40)</p><p>Yeah, so what do you suppose is going to happen there? Because if you&#8217;re not doing that, you&#8217;re kind of stupid. It may not be an honest way to do business, but if you&#8217;re competing for business and you don&#8217;t use the same tools everybody else does, it&#8217;s dumb.</p><p>Jonathan Duarte (21:48)</p><p>Please.</p><p>Yeah, I think...</p><p>John Sumser (22:05)</p><p>But that means that</p><p>all of the data in all of the systems is already starting to be too corrupt to use.</p><p>Jonathan Duarte (22:11)</p><p>Yeah. And it&#8217;s, and we&#8217;re dumbing down to the middle, right? Like, &#8275; I had &#8275; a friend in the industry, said, Hey, we created this great marketing thing for our clients and we&#8217;re going to help them build their marketing strategy using, &#8275; open AI. I go, if I were your boss, I&#8217;d just throw that in the trash can. And he goes, but why? I go, because it&#8217;s, you&#8217;re just using dumbed down, dumbed down average.</p><p>You&#8217;re getting more average data out of chat GPT. You&#8217;re not getting marketing needs to have teeth, not gums. So you can&#8217;t use that for a marketing project. I mean, you can get the competitive research, all that stuff. Great. But it&#8217;s not going to create, just do it. It&#8217;s going to say, &#8275; just do it after Nike and everyone else has copied that a 50 billion times. But I think you&#8217;re.</p><p>John Sumser (23:01)</p><p>Right.</p><p>Jonathan Duarte (23:06)</p><p>The question, how do we? How do we stay away from hacked data inferences of like I say, hey, my company is better at text recruiting than somebody else&#8217;s right? &#8275; The way to do that as we&#8217;re using it is we know that we are going to. &#8275; It&#8217;s almost kind of a scripted type of solution. We only want chat GPT to understand the outcome and provide a humanistic.</p><p>response to it. But we&#8217;re creating a deterministic response, but we want chat GPT or one of the language models is to generate the human response to that specific answer. But we&#8217;re going to force the answer to it and just use it for the generative, not the research.</p><p>John Sumser (23:54)</p><p>So that gets through. There&#8217;s a kind of a technical question that I&#8217;ve been wondering about that you&#8217;ll probably have a great idea for. &#8275; And that is...</p><p>If you have a piece of data and you run it through the LLM, &#8275; the LLM is going to do its thing with it. And so because LLMs don&#8217;t particularly understand math, if your text says 1, 2, 3, 4, 5, 6,</p><p>and you want to know the next thing, it&#8217;s as likely to say 711 as 72 or something. It doesn&#8217;t &#8275; know. It doesn&#8217;t understand. so you get answers that are logical from the prediction of what word might come next that don&#8217;t make any sense at all. And so if you drag the data through the LLM,</p><p>You get that stuff applied to it. If you route around the LLM, then you end up with a layer where you have language coming from the tool and the data coming from some other place and you merge them afterwards. And I haven&#8217;t seen anybody who can tell you if those two things merged make the same viewpoint. Right? So the answer is six.</p><p>Jonathan Duarte (25:25)</p><p>Mm-hmm.</p><p>John Sumser (25:27)</p><p>But the narrative from the LLM says seven. &#8275; And I don&#8217;t know how you reconcile that because you almost have to have a feedback loop that forces you to drag the data back through the LLM in order to make sure that the answer and the data are aligned on the other side.</p><p>Jonathan Duarte (25:31)</p><p>Mm-hmm.</p><p>Yep, that&#8217;s what we call now, like there&#8217;s tools out there called, for best way of saying it, they&#8217;re anti-hallucination tools. All right? &#8275; They&#8217;re expensive, they&#8217;re really for enterprise only at this point, &#8275; but to solve your original problem you were mentioning about, &#8275; say benefits, say you have a global organization, you&#8217;ve got &#8275; benefits by region, &#8275; by employee type.</p><p>multiple languages, multiple countries. If, and I know this is just a hypothetical, but if all that data was up to speed, &#8275; how you can do use these RAG systems now is you can go retrieve the data by the employee&#8217;s location in that whatever database it&#8217;s in. And again, this is very hypothetical, but you could retrieve that data.</p><p>John Sumser (26:29)</p><p>Hmm.</p><p>Jonathan Duarte (26:45)</p><p>put it through the LLM to come back with the response. But before the response goes back to the user, it is purposely going back to the document that the RAG came up with or the LLM came up with and verifying that that context is accurate. So those vector systems, &#8275; you&#8217;re seeing companies implement this stuff, but it is truly at the Fortune 100 level right now.</p><p>John Sumser (27:13)</p><p>Yeah, and so I understand why people think that would work, but let&#8217;s say you get to the other side and the LLM and the document don&#8217;t agree and you have to redo the work.</p><p>Jonathan Duarte (27:27)</p><p>Mm-hmm.</p><p>John Sumser (27:29)</p><p>You&#8217;re going to have the same probability that it comes out bad as you had the first time through. Even if you validate the data out, can&#8217;t kill.</p><p>Jonathan Duarte (27:34)</p><p>Yes.</p><p>John Sumser (27:40)</p><p>You can&#8217;t kill hallucinations because everything that a large language model does is a hallucination. And, and, um, you know, I, do you know who my room, the writing is?</p><p>Jonathan Duarte (27:51)</p><p>No, I&#8217;ve heard the name, but I&#8217;ve never met</p><p>John Sumser (27:54)</p><p>So she was one of the co-founders of OpenAI. And when all of the noise happened a couple of years ago, she fled with all the rest of the smart people. And she went off and started her, on this issue, she went off and started a company called Thinking Machines, which is another billion dollar AI company. And...</p><p>The first thing that she did was this experiment. She went to a &#8275; conversational interface and ran the same query a thousand times. And what she got back from running it a thousand times was 80 different answers. those answers, some of those answers were right, but most of the answers were wrong in little tiny ways.</p><p>Jonathan Duarte (28:45)</p><p>Mm-hmm. Mm-hmm.</p><p>John Sumser (28:45)</p><p>They weren&#8217;t</p><p>just another way of saying the same thing. You can group all those that were just purely another way of saying the same thing. But of the 80 different answer types, something like 60 or 65 of them were noticeably wrong. And this is as good as it gets. she&#8217;s trying, the whole job of thinking machines is that she is trying to build tools that inhibit hallucinations.</p><p>But really, that&#8217;s what these systems do. They don&#8217;t have any attachment to meaning or any attachment to the world. They just are probabilistic guessing machines.</p><p>Jonathan Duarte (29:26)</p><p>Yep, you know, and that is a little segue in the conversation. I think that&#8217;s why anyone who&#8217;s saying that, hey, we&#8217;re going to do this conversational AI at scale, &#8275; it doesn&#8217;t really know what this doesn&#8217;t really know technically that it isn&#8217;t going to work. Like we&#8217;re not going to be able to in HR, we&#8217;re not going to be able to go to SAP. Say you have SAP in one division, you&#8217;ve got Oracle in another vision and you got</p><p>in division, and then you&#8217;ve got Workday in another division. There&#8217;s no way there&#8217;s going to be a UI that goes to each of those platforms comes back with the right answer. Maybe not even in my lifetime, but I think everyone is just going to cut the umbilical cord on that concept pretty quickly. I think what we&#8217;re going to start seeing is because everyone&#8217;s investing so much money, and I think someone has said that the amount of money that had been invested by the United States in AI in this last</p><p>three years was more &#8275; than the GDP or consumer spending in the United States. And the only time that has ever happened was when we had the railroads. if you think, yes, this is coming, it&#8217;s CIOs are investing in smarter emails at the moment. But I think it all goes back to Taylorism. What are the pieces that we can do, but not individually?</p><p>What are the pieces we can do that we can automate things and determine should we have been even writing emails in the first place?</p><p>John Sumser (31:01)</p><p>Right. Right. That&#8217;s that&#8217;s a real question. And I think just to drag it back to your history and your expertise, I think you have enough time at the front end of how things evolve so that you know that. Automating the same old BS doesn&#8217;t really get you anywhere. It just gets it just gets you faster, stupid. It doesn&#8217;t get you. It doesn&#8217;t get you faster, smart. And in order to get the faster, smart.</p><p>you have to be able to completely reimagine the process. And I don&#8217;t know that our educational system trains people to reimagine processes. So I don&#8217;t know where the people who are going to reimagine processes come from.</p><p>Jonathan Duarte (31:33)</p><p>Exactly.</p><p>Yeah, it&#8217;s really tough. think &#8275; I was very early on in my career, was very, very fortunate to have plenty of mentors, even yourself in the recruiting space and Jerry. I didn&#8217;t know we were all thinking about Taylorism the same way even 30 years ago. But &#8275; when I learned early on at Gateway 2000, as I think I mentioned before, we&#8217;ve rebuilt</p><p>their sales order processing, their purchasing, their inventory, their financials, their manufacturing, their &#8275; customer support system. An entire year we rebuilt it all. And I was fortunate to get, that was 24 years old. I was fortunate to see all four corners of the business, all the data that was required, how to merge all this stuff. And it just came down to me that &#8275; it is really hard to have someone who knows</p><p>all four corners of the business, which is what is required to do process automation.</p><p>John Sumser (32:49)</p><p>Yep. Yep. And we have such deep silos that nobody knows all the pieces of business.</p><p>Jonathan Duarte (32:55)</p><p>Yeah. Yep. And I think when we look at, who&#8217;s getting laid off with AI and all this other stuff, just, I, I just go back to the people who know the business are the most critical assets right now. And if you&#8217;re not as it is CHRO, if you&#8217;re not finding a way to keep those people on, they may have, they may not be able to spell AI who cares, but they know the process. They know your business.</p><p>They know your customers. They know where the data is. Those are the people you have to keep because as you&#8217;re trying to do workflow process automation, which is going to happen, they&#8217;re the people who know the answers.</p><p>John Sumser (33:38)</p><p>One of the things I think about this emphasis on workflow process automation is this is basically a private equity approach to solving problems. If you automate the process, then you can fire the people. &#8275; so you automate the process, you fire the people.</p><p>Then what do do? Then what do you do the next time that you have to rethink the process? I think you&#8217;re stuck because you don&#8217;t have anybody who understands actually how it works. And that means that the logical conclusion of automating your workflows is that you&#8217;re going out of business.</p><p>Jonathan Duarte (34:21)</p><p>True. here&#8217;s the thing that, you know, I see there&#8217;s so it&#8217;s so interesting. I watch a lot of YouTube videos because I just geek out on this AI stuff and watch all these guys are going to create these processes for businesses and they send you cold email and they say, hey, we&#8217;re going to automate your recruiting for you. And, you know, for $10,000, it&#8217;s done. It&#8217;s like. Yeah, but I have like five different processes. And then we had a merger. So everything you had.</p><p>is worthless. So it&#8217;s all about understanding that businesses change. If they don&#8217;t change, your business doesn&#8217;t exist. mean, Kodak doesn&#8217;t exist in its former self anymore. And I think stats are something only like 20 % of the Fortune 100 were in the Fortune 100 &#8275; or 10 years ago or something like that, or maybe 20. So the natural</p><p>John Sumser (35:16)</p><p>Right. Get out there.</p><p>Jonathan Duarte (35:20)</p><p>Instinct is that there&#8217;s going to be change. So that&#8217;s why I say you need to know the people who know the product Know the customer and know where the data is Because if they do they could change with it because you can&#8217;t hard code your business There&#8217;s no such thing</p><p>John Sumser (35:35)</p><p>Well, so that&#8217;s the paradox, isn&#8217;t it? Because if you need the people that you&#8217;re going to automate out of the job, why automate in the first place?</p><p>Jonathan Duarte (35:45)</p><p>Yeah, I think what we see is we automate the seamstresses. We&#8217;d automate the phone people, know, were the people who plugging stuff in. &#8275; We don&#8217;t look back and say, God, I wish, you know, I could call John and he could plug me into Jerry on a phone line. We&#8217;ve gone past that. So I think there are going to be areas where we can. And there&#8217;s selective executive recruiting. That&#8217;s you&#8217;re not going to, no one&#8217;s ever automating that.</p><p>It&#8217;s still a gut feel. And from talking to somebody else, no computer is going to call John and say, Hey, John, what do you think about Jerry? How does he work in this situation? You can have that in a golf game, right? You can have that. No AI is going to do that. No process automation is going to do that. So that relationship piece in sales and marketing is never going away. We&#8217;re not going to automate it, but we can change the manufacturing line a little bit of things that we know we have.</p><p>a specific outcome. &#8275; If you have to do an ad creation and has to go a certain way and has to get put over here at this certain time, and then we&#8217;re going to track the metrics to see if that ad actually worked, sure, you can automate that.</p><p>But you can&#8217;t automate the process of the total &#8275; understanding of what the insights in those numbers look like.</p><p>John Sumser (37:05)</p><p>Yep. Well, I think we could probably keep talking for another couple of days. &#8275; This has been great.</p><p>Jonathan Duarte (37:12)</p><p>Well, we probably will be.</p><p>John Sumser (37:14)</p><p>Tell people how to your name, how to get in touch with you, and a little bit about your company.</p><p>Jonathan Duarte (37:21)</p><p>Yeah, so my name&#8217;s Jonathan Duarte, D-U-A-R-T-E. Easy to find on LinkedIn because I&#8217;m an old SEO guy. So my LinkedIn profile is best LinkedIn profile. Pretty easy to find. &#8275; I run a company called GoHire. We&#8217;ve been doing &#8275; talent and recruiting automation &#8275; from text messaging to &#8275; any kind of strange HR platforms. &#8275;</p><p>that you think, hey, there&#8217;s a way to automate. We do a lot of custom builds in there. And then I also do a lot of advisory, some private equity, some VCs, as well as corporations on overall what the process should be and what kind of tools might be able to solve those types of solutions. I also do some early stage investing and consulting for early stage companies in the HR tech space.</p><p>John Sumser (38:16)</p><p>It&#8217;s been a great conversation, Jonathan. I really appreciate taking the time to stop by and do this. Thanks everybody. Yep, and we&#8217;ll see you all next time through on the HR Examiner podcast.</p><p>Jonathan Duarte (38:22)</p><p>you bet, John. Thank you, as always.</p><p>John Sumser (38:30)</p><p>Okay,</p><p><strong>Keywords</strong></p><p>chatbots, recruitment, AI, HR technology, data quality, SaaS, Taylorism, conversational interfaces, security risks, automation</p>]]></content:encoded></item><item><title><![CDATA[HREX v 1.07 George Larocque]]></title><description><![CDATA[Following the Money in the HRTech/WorkTech Industry]]></description><link>https://www.hrexaminer.com/p/hrex-v-107-george-larocque</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-v-107-george-larocque</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Wed, 04 Feb 2026 12:48:21 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/186785165/80b31c7cbc2ecc17a5a86d7fc7cd5e33.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this conversation, John Sumser and George LaRocque explore the evolving landscape of HR technology, market insights, and the impact of economic changes on recruiting and workforce dynamics. They discuss the importance of data in shaping future HR solutions, the value of education in today&#8217;s job market, and the challenges posed by financial engineering in startups. The conversation also touches on the rapid advancements in AI and its implications for the workforce, as well as investment trends and predictions for the future of HR.</p><p><strong>Takeaways</strong></p><ul><li><p>The HR tech market has seen significant changes over the years.</p></li><li><p>Understanding market sizing and capital flow is crucial for success.</p></li><li><p>Recruiting lacks accountability, impacting its effectiveness.</p></li><li><p>Economic changes are reshaping the landscape of HR.</p></li><li><p>The value of education is being re-evaluated in today&#8217;s job market.</p></li><li><p>AI is transforming workforce dynamics, but many are unprepared.</p></li><li><p>Investment trends indicate a shift towards larger deals in HR tech.</p></li><li><p>Military discipline can inform business practices and decision-making.</p></li><li><p>Financial engineering poses challenges for startups in the HR space.</p></li><li><p>Data will be a key driver in shaping future HR solutions.</p></li></ul><p><strong>Titles</strong></p><ul><li><p>Navigating the Future of HR Tech</p></li><li><p>The Intersection of Data and HR</p></li></ul><p><strong>Sound Bites</strong></p><ul><li><p>&#8220;Whoever has the data wins&#8221;</p></li><li><p>&#8220;We&#8217;re not in the days of 2020, 2021&#8221;</p></li><li><p>&#8220;It&#8217;s financial engineering&#8221;</p></li></ul><p><strong>Chapters</strong></p><p><strong>00:00</strong>The Evolution of HR Tech and Market Insights</p><p><strong>04:50</strong>Understanding Market Sizing and Capital Flow</p><p><strong>08:50</strong>The Challenges of Recruiting and Accountability</p><p><strong>12:54</strong>Navigating Economic Changes in HR Tech</p><p><strong>17:05</strong>The Value of Education in Today&#8217;s Job Market</p><p><strong>21:18</strong>The Impact of Military Experience on Technology Perspectives</p><p><strong>25:18</strong>AI&#8217;s Role in the Future of Work and Investment Trends</p><p><strong>Keywords</strong>HR Tech, Market Insights, Recruiting, Economic Changes, Education, AI, Investment Trends, Business Discipline, Financial Engineering, Data Solutions</p>]]></content:encoded></item><item><title><![CDATA[A Guide to Key Thinkers in AI Security]]></title><description><![CDATA[What You Don't Know Can Hurt You]]></description><link>https://www.hrexaminer.com/p/a-guide-to-key-thinkers-in-ai-security</link><guid isPermaLink="false">https://www.hrexaminer.com/p/a-guide-to-key-thinkers-in-ai-security</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Sat, 31 Jan 2026 00:18:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JFxb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JFxb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JFxb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 424w, https://substackcdn.com/image/fetch/$s_!JFxb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 848w, https://substackcdn.com/image/fetch/$s_!JFxb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 1272w, https://substackcdn.com/image/fetch/$s_!JFxb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JFxb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:400976,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/186135012?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JFxb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 424w, https://substackcdn.com/image/fetch/$s_!JFxb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 848w, https://substackcdn.com/image/fetch/$s_!JFxb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 1272w, https://substackcdn.com/image/fetch/$s_!JFxb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c7fab5-089c-4756-97dc-1391fb3fd367_5999x3999.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(also see <a href="https://www.hrexaminer.com/publish/post/186142062">60 Ways to Hack Your Chatbot</a>)</p><p>Nowadays, machines have opinions. And some people have opinions about how to break them.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you&#8217;re trying to understand the emerging field of AI security, and you should be,  these systems are making decisions about your employees, your candidates, and your customers. Here&#8217;s who&#8217;s doing the serious thinking. Not the vendor pitches. Not the breathless LinkedIn posts. The actual work.</p><p><strong>Simon Willison</strong> is probably the closest thing this field has to a philosopher-practitioner. He co-created Django back when web frameworks were the frontier, and now he&#8217;s turned his attention to what happens when language models meet the real world. He coined &#8220;prompt injection&#8221; in 2022, which is a bit like naming a disease. Once you can name it, you start to see it everywhere.</p><p>His concept of the &#8220;lethal trifecta&#8221; should be tattooed on the forehead of every HR technology buyer: private data, untrusted content, and external communication. Put all three together and you&#8217;ve built a system that can be manipulated into doing things you never intended. Sound familiar? It should. That&#8217;s every AI-powered recruiting tool, every employee chatbot, every &#8220;intelligent&#8221; workflow.</p><p>Willison synthesizes academic research into something practitioners can actually use. That&#8217;s rare and valuable.</p><ul><li><p>https://simonwillison.net</p></li><li><p><a href="https://simonwillison.net/tags/prompt-injection/">https://simonwillison.net/tags/prompt-injection/</a></p></li></ul><p><strong>Johann Rehberger</strong> breaks things for a living. Director of Red Team at Electronic Arts, formerly building offensive security teams at Microsoft and Uber. In August 2025, he published a vulnerability report every single day&#8230;ChatGPT, Claude Code, GitHub Copilot, Cursor, Devin. One after another after another.</p><p>The message isn&#8217;t subtle: <em>everything is vulnerable</em>. The question isn&#8217;t whether your AI system can be manipulated. The question is whether anyone&#8217;s bothered to try yet.</p><p>His blog is called &#8220;Embrace the Red.&#8221; That tells you something about his worldview.</p><ul><li><p>https://embracethered.com</p></li></ul><p><strong>Nicholas Carlini</strong> works at Anthropic now, after a stint at Google DeepMind. He&#8217;s one of the authors of &#8220;<a href="https://arxiv.org/abs/2307.15043">Universal and Transferable Adversarial Attacks on Aligned Language Models</a>,&#8221; which is academic-speak for &#8220;we found ways to break these things that work across different systems.&#8221;</p><p>What makes Carlini interesting is that he understands the fundamental flaws in these systems better than almost anyone. He still finds them useful. That&#8217;s intellectual honesty. <em>The technology is broken in important ways AND it&#8217;s valuable. Both things are true.</em> Most people can only hold one of those ideas at a time.</p><ul><li><p>https://nicholas.carlini.com</p></li></ul><p><strong>Kai Greshake</strong> gave us the &#8220;Inject My PDF&#8221; attack&#8230;hiding instructions in resumes that fool AI screening systems into seeing whatever you want them to see. It&#8217;s clever. It&#8217;s also terrifying if you&#8217;re using AI to make hiring decisions.</p><p>Think about that for a moment. <em>The candidate controls what the machine &#8220;sees.&#8221;</em> And you&#8217;re trusting the machine&#8217;s opinion.</p><ul><li><p>Research paper: <a href="https://arxiv.org/abs/2302.12173">https://arxiv.org/abs/2302.12173</a></p></li></ul><p><strong>Bruce Schneier</strong> has been writing about security since before most AI researchers were born. His blog has run since 2004. His newsletter since 1998. When he turns his attention to AI,it&#8217;s worth paying attention.</p><p>His 2025 book with Nathan Sanders, <em>Rewiring Democracy</em>, looks at AI&#8217;s impact on governance and politics. But his security writing gets at something deeper: the relationship between trust and technology. <em>AI systems ask us to trust them in ways we don&#8217;t fully understand.</em> Schneier&#8217;s been thinking about that problem longer than anyone.</p><ul><li><p>https://www.schneier.com</p></li></ul><p><strong>Daniel Miessler</strong> runs the &#8220;Unsupervised Learning&#8221; newsletter and thinks carefully about the attack/defense balance in AI systems. Schneier cited his SPQA architecture work, which is a framework for thinking about how AI systems process information and where they&#8217;re vulnerable.</p><p>Practitioners read Miessler because he&#8217;s practical without being shallow.</p><ul><li><p>https://danielmiessler.com</p></li><li><p>https://newsletter.danielmiessler.com</p></li></ul><h2>The Framework Builders</h2><p><strong>Steve Wilson</strong> leads the OWASP GenAI Security Project and founded the OWASP Top 10 for LLM Applications. If you haven&#8217;t read the OWASP LLM Top 10, stop reading this and go read that first. It&#8217;s the closest thing we have to a shared vocabulary for AI security risks.</p><p>The 2025 version reflects two years of learning. The new Agentic Top 10 addresses what happens when AI systems can take actions autonomously. That is exactly where HR technology is heading.</p><ul><li><p>https://genai.owasp.org</p></li><li><p><a href="https://genai.owasp.org/llm-top-10/">https://genai.owasp.org/llm-top-10/</a></p></li></ul><h2>The Academic Foundation</h2><p>If you want to understand where the serious research is happening, start with these papers:</p><p>&#8220;Universal and Transferable Adversarial Attacks on Aligned Language Models&#8221; &#8212; The attacks that work on one system often work on others. That&#8217;s not a bug. It&#8217;s a feature of how these systems are built.</p><ul><li><p><a href="https://arxiv.org/abs/2307.15043">https://arxiv.org/abs/2307.15043</a></p></li></ul><p>&#8220;Not what you&#8217;ve signed up for&#8221; &#8212; Greshake and colleagues on indirect prompt injection. The attack comes through the data the system processes, not through the user interface. Your AI reads a poisoned document and starts following the attacker&#8217;s instructions.</p><ul><li><p><a href="https://arxiv.org/abs/2302.12173">https://arxiv.org/abs/2302.12173</a></p></li></ul><p>&#8220;Systems Security Foundations for Agentic Computing&#8221; &#8212; Rehberger and colleagues on what it means to secure AI agents. Spoiler: it&#8217;s harder than securing traditional software.</p><ul><li><p><a href="https://arxiv.org/abs/2512.01295">https://arxiv.org/abs/2512.01295</a></p></li></ul><div><hr></div><h2>The Vendor Research Worth Reading</h2><p>Most vendor blogs are marketing dressed up as insight. A few are doing actual work:</p><p><strong>Lakera</strong> &#8212; Prompt injection defense. Dropbox uses them. That means they&#8217;ve been tested at scale.</p><ul><li><p><a href="https://www.lakera.ai/blog">https://www.lakera.ai/blog</a></p></li></ul><p><strong>HiddenLayer</strong> &#8212; Model security and red teaming. They&#8217;re thinking about the threats most vendors pretend don&#8217;t exist.</p><ul><li><p><a href="https://hiddenlayer.com/research/">https://hiddenlayer.com/research/</a></p></li></ul><p><strong>Mindgard</strong> &#8212; They maintain a useful list of who&#8217;s who in AI security, which is how I know they&#8217;re paying attention to the right people.</p><ul><li><p><a href="https://mindgard.ai/blog">https://mindgard.ai/blog</a></p></li></ul><p><strong>Adversa AI</strong> &#8212; Weekly roundups of AI security developments. Good for staying current without drowning.</p><ul><li><p><a href="https://adversa.ai/blog">https://adversa.ai/blog</a></p></li></ul><div><hr></div><h2>Resource Collections</h2><p><strong>NetsecExplained</strong> on GitHub &#8212; A curated collection that&#8217;s actually curated, not just aggregated.</p><ul><li><p><a href="https://github.com/NetsecExplained/Attacking-and-Defending-Generative-AI">https://github.com/NetsecExplained/Attacking-and-Defending-Generative-AI</a></p></li></ul><p><strong>MITRE ATLAS</strong> &#8212; The adversarial threat landscape for AI systems. MITRE knows how to build taxonomies that practitioners can use.</p><ul><li><p>https://atlas.mitre.org</p></li></ul><p><strong>NIST AI Risk Management Framework</strong> &#8212; Government rigor applied to AI. It&#8217;s slower than the vendor frameworks but more likely to be right.</p><ul><li><p><a href="https://www.nist.gov/itl/ai-risk-management-framework">https://www.nist.gov/itl/ai-risk-management-framework</a></p></li></ul><div><hr></div><h2>Three Things to Remember</h2><p>First: The people doing the best work on AI security are often the same people who understand why the technology is valuable. That&#8217;s not a contradiction. <strong>You can&#8217;t protect what you don&#8217;t understand.</strong></p><p>Second: The vulnerability reports keep coming. Every month, every week, someone finds another way to make these systems do things they shouldn&#8217;t. The technology is moving faster than our ability to secure it. That&#8217;s the reality.</p><p>Third: &#8220;I don&#8217;t know&#8221; is still the most honest answer to most AI security questions. We&#8217;re early. The people worth reading are the ones who admit what they don&#8217;t know while working to figure it out.</p><p>The machines will keep having opinions. The question is whether we&#8217;re paying attention to the people who can help us understand when those opinions are wrong.</p><p>=======================<br>Photo by <a href="https://unsplash.com/@sebastiaanstam?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">sebastiaan stam</a> on <a href="https://unsplash.com/photos/man-wearing-red-hoodie-RChZT-JlI9g?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[60 Ways to Hack Your Chatbot]]></title><description><![CDATA[Introducing the HRExaminer AI Security Field Guide (for HRTech/WorkTech)]]></description><link>https://www.hrexaminer.com/p/60-ways-to-hack-your-chatbot</link><guid isPermaLink="false">https://www.hrexaminer.com/p/60-ways-to-hack-your-chatbot</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Thu, 29 Jan 2026 12:16:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ca6J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ca6J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ca6J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 424w, https://substackcdn.com/image/fetch/$s_!Ca6J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 848w, https://substackcdn.com/image/fetch/$s_!Ca6J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 1272w, https://substackcdn.com/image/fetch/$s_!Ca6J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ca6J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:204628,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/186142062?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ca6J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 424w, https://substackcdn.com/image/fetch/$s_!Ca6J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 848w, https://substackcdn.com/image/fetch/$s_!Ca6J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 1272w, https://substackcdn.com/image/fetch/$s_!Ca6J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39e3c1e6-8ce8-4032-8d77-c2dc3031fd82_3840x2160.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6><em>You just slip out the back, Jack<br>Make a new plan, Stan<br>You don&#8217;t need to be coy, Roy<br>Just get yourself free<br>Hop on the bus, Gus<br>You don&#8217;t need to discuss much<br>Just drop off the key, Lee<br>And get yourself free</em></h6><h6><br>=&gt;Paul Simon &#8211; <a href="https://www.youtube.com/watch?v=E8JXiroAi6Y">50 Ways to Leave Your Lover</a></h6><p>Photo by <a href="https://unsplash.com/@growtika?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Growtika</a> on <a href="https://unsplash.com/photos/a-skeleton-sitting-at-a-desk-with-a-laptop-and-keyboard-9TFc_FRHkeA?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><p>California&#8217;s economy, particularly the San Francisco Bay area, has boom and bust cycles. We are in one now. It&#8217;s been that way since the Goldrush (1848 to 1890). People here understand that whatever goes up must come down. We love economic bubbles (at least until they pop).</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>In the early going, safety and security are rarely central concerns. The promise of &#8216;gold in them thar hills&#8217; is alluring enough to throw caution to the wind. The question is always &#8216;how to dig the gold mine quickly.&#8217; It&#8217;s never &#8216;how to mine the gold safely.&#8217;</p><p>Inexperienced gold hunters ignored (or didn&#8217;t understand) the risks. The name of the game was wealth acquisition, not safety or reliability.</p><p>The hazards included</p><ul><li><p>Human-Caused: Anti-Chinese riots and violence over claim jumping.</p></li><li><p>Structural: Abandoned mines, unstable tunnels, and hidden dynamite</p></li><li><p>Disease: Crowded, unsanitary mining camps were breeding grounds for cholera, typhoid fever, and dysentery.</p></li><li><p>Environmental: Avalanches, freezing temperatures, and river crossings.</p></li></ul><p>90% to 95% of miners did not become wealthy. Less than 1% became rich. <strong>The real money makers in the gold rush were the providers of tools and supplies (and the bars and prostitutes)</strong>.</p><p>Does this sound familiar?</p><p>In the <strong>AI gold rush</strong>, there is a growing problem. Security and safety are given very limited attention, just like in the gold rush. As the bubble intensifies, the risks multiply.<br><br>This time last year, I compiled a list of ways to hack an AI tool or system. There were about 20. Today, that number is 60. It&#8217;s will be over one hundred by mid-year.</p><p>It&#8217;s the wild, wild west. It&#8217;s completely understandable that &#8216;getting things done&#8217; comes before making things secure and safe. But, today&#8217;s gold rush has a compressed time line. In the original, the safety problems were all held by the miners. Risk and reward were intricately tied together.</p><p>Today&#8217;s mines are the wallets, data, and organizations of clients. AI companies (which are increasingly easy to start) have investors who demand rapid payback. This forces the &#8216;miners&#8217; to sell before the technology is safe. While the legal system seems to be moving towards increased vendor liability, the current risk is exclusively born by customers.</p><p><a href="https://docs.google.com/document/d/1B9D3RbglhNBpPIueHRCnoLUQp3ZHqv62gq49fRD5c3I/edit?usp=sharing">We have built a guide to the problem and their solutions</a>.</p><p>The guide contains</p><p>&#183; a list of 60 ways an AI system or tool can be hacked,</p><p>&#183; a list of companies that are looking at subsets of the security problem,</p><p>&#183; questions you should be asking vendors.</p><p>It closes with 3 things to do when you&#8217;ve finished reading.</p><p>There are more questions but these are the critical ones.</p><p>Each element of this guide is written to be easily understood. It&#8217;s still a lot. That&#8217;s one of the key problems of our era: The volume of data is exploding while our ability to decode and understand it isn&#8217;t. That&#8217;s why this feels so overwhelming.</p><p>Rest assured that this guide will be outdated as soon as it hits the streets. It&#8217;s a bookmark in a time of exploding possibility. Here&#8217;s the link. Feel free to copy and share. Please keep the attributions to HRExaminer.</p><p>It&#8217;s really hard to keep all of this stuff managed and communicated. The guide should help. You should hold your vendor responsible. They are in a better position to understand and manage.</p><p><a href="https://docs.google.com/document/d/1B9D3RbglhNBpPIueHRCnoLUQp3ZHqv62gq49fRD5c3I/edit?usp=sharing">The HRExaminer AI Security Field Guide (for HRTech/WorkTech)</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/60-ways-to-hack-your-chatbot?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/60-ways-to-hack-your-chatbot?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><p>=========================================<br><strong>Appendix</strong></p><p>Here are the 60 ways to hack your chatbot. See <a href="https://docs.google.com/document/d/1B9D3RbglhNBpPIueHRCnoLUQp3ZHqv62gq49fRD5c3I/edit?usp=sharing">the guide</a> for a deeper look, explanations, sources, and questions to ask vendors..</p><p><strong>AI-Specific Attack Vectors (Non-Traditional)</strong></p><p><strong>1. Prompt-Level Attacks (Inference-Time Manipulation)</strong></p><p><strong>1.1 Prompt Injection (Direct)</strong></p><p>&#183; Overriding system instructions via user input<br>&#183; Instruction hierarchy collapse<br>&#183; Role confusion attacks (&#8220;ignore previous instructions&#8221;)</p><p><strong>1.2 Indirect Prompt Injection</strong></p><p>&#183; Malicious instructions embedded in:</p><blockquote><p>o Web pages<br>o PDFs<br>o Emails<br>o Knowledge base documents</p></blockquote><p>&#183; Triggered during RAG ingestion or tool calls</p><p><strong>1.3 Multi-Step Prompt Injection</strong></p><p>&#183; Harmless-looking initial prompts that prime later exploitation<br>&#183; Instruction staging across turns<br>&#183; &#8220;Latent payload&#8221; prompts</p><p><strong>1.4 Context Boundary Attacks</strong></p><p>&#183; Overflowing context windows to push out safety instructions<br>&#183; Token flooding<br>&#183; Long-document smuggling</p><p><strong>1.5 Prompt Steganography</strong></p><p>&#183; Instructions hidden via:</p><blockquote><p>o Whitespace<br>o Unicode<br>o Emojis<br>o HTML/Markdown tricks<br>o Base64 / encoding layers</p></blockquote><p><strong>2. Output Manipulation &amp; Misalignment Attacks</strong></p><p><strong>2.1 Output Steering</strong></p><p>&#183; Forcing biased, false, or malicious conclusions<br>&#183; Manipulating tone, framing, or certainty</p><p><strong>2.2 Instruction Reinterpretation</strong></p><p>&#183; Exploiting ambiguous prompts<br>&#183; Semantic drift across turns</p><p><strong>2.3 Policy Bypass via Reframing</strong></p><p>&#183; Asking disallowed questions indirectly<br>&#183; Hypotheticals, roleplay, meta-analysis</p><p><strong>2.4 Over-Compliance Exploits</strong></p><p>&#183; Leveraging helpfulness bias<br>&#183; Exploiting &#8220;explain why this is wrong&#8221; patterns</p><p><strong>3. Tool-Calling &amp; Function-Calling Attacks</strong></p><p><strong>3.1 Tool Injection</strong></p><p>&#183; Forcing unintended tool invocation<br>&#183; Triggering tools with malicious parameters</p><p><strong>3.2 Argument Smuggling</strong></p><p>&#183; Embedding instructions inside tool arguments<br>&#183; JSON field manipulation</p><p><strong>3.3 Tool Confusion Attacks</strong></p><p>&#183; Exploiting poorly named or overlapping tools<br>&#183; Forcing wrong tool selection</p><p><strong>3.4 Tool Chain Hijacking</strong></p><p>&#183; Manipulating output of Tool A to compromise Tool B<br>&#183; Cross-tool prompt contamination</p><p><strong>3.5 Unsafe Tool Autonomy</strong></p><p>&#183; Agents acting without human confirmation<br>&#183; Recursive or runaway tool usage</p><p><strong>4. Agent-Specific Attacks</strong></p><p><strong>4.1 Goal Hijacking</strong></p><p>&#183; Rewriting agent objectives mid-task<br>&#183; Conflicting goals across agent memory</p><p><strong>4.2 Agent Memory Poisoning</strong></p><p>&#183; Inserting false facts into:</p><blockquote><p>o Short-term memory<br>o Long-term memory<br>o Vector memory</p></blockquote><p>&#183; Persistence across sessions</p><p><strong>4.3 Planning Manipulation</strong></p><p>&#183; Corrupting chain-of-thought or plans<br>&#183; Forcing suboptimal or dangerous steps</p><p><strong>4.4 Self-Modification Exploits</strong></p><p>&#183; Agents editing:</p><blockquote><p>o Own prompts<br>o Own policies<br>o Own routing logic</p></blockquote><p><strong>4.5 Delegation Attacks</strong></p><p>&#183; Exploiting agent-to-agent trust<br>&#183; Compromising downstream agents</p><p><strong>5. Agent Orchestration &amp; Workflow Attacks</strong></p><p><strong>5.1 Workflow Injection</strong></p><p>&#183; Altering task graphs<br>&#183; Skipping validation steps</p><p><strong>5.2 Role Boundary Collapse</strong></p><p>&#183; Agents acting outside assigned authority<br>&#183; Planner vs executor confusion</p><p><strong>5.3 Recursive Loop Attacks</strong></p><p>&#183; Infinite planning or execution loops<br>&#183; Cost-amplification denial</p><p><strong>5.4 Orchestrator Blind Spots</strong></p><p>&#183; Attacks occurring between steps<br>&#183; Cross-agent contamination undetected</p><p><strong>5.5 Event Ordering Manipulation</strong></p><p>&#183; Race conditions in agent workflows<br>&#183; Exploiting async execution</p><p><strong>6. MCP (Model Context Protocol)&#8211;Specific Attacks</strong></p><p><strong>6.1 Malicious MCP Servers</strong></p><p>&#183; Supplying poisoned context<br>&#183; Returning crafted responses to steer model behavior</p><p><strong>6.2 Context Overreach</strong></p><p>&#183; MCP returning more data than requested<br>&#183; Hidden instruction injection</p><p><strong>6.3 MCP Tool Abuse</strong></p><p>&#183; Tools that expose sensitive system state<br>&#183; Excessive privileges</p><p><strong>6.4 Trust Spoofing</strong></p><p>&#183; Impersonating trusted MCP endpoints<br>&#183; Confused deputy scenarios</p><p><strong>6.5 MCP Chaining Attacks</strong></p><p>&#183; One MCP poisoning another MCP&#8217;s inputs</p><p><strong>7. RAG (Retrieval-Augmented Generation) Attacks</strong></p><p><strong>7.1 Knowledge Base Poisoning</strong></p><p>&#183; Inserting malicious documents<br>&#183; Editing authoritative sources</p><p><strong>7.2 Embedding Manipulation</strong></p><p>&#183; Semantic collisions<br>&#183; Vector space crowding</p><p><strong>7.3 Retrieval Bias Attacks</strong></p><p>&#183; Forcing retrieval of low-quality or malicious chunks<br>&#183; Query manipulation</p><p><strong>7.4 Citation Spoofing</strong></p><p>&#183; Fake citations<br>&#183; Source hallucination amplification</p><p><strong>7.5 Contextual Override</strong></p><p>&#183; Retrieved documents overriding system policies</p><p><strong>8. Training Data Attacks</strong></p><p><strong>8.1 Data Poisoning (Pre-Training)</strong></p><p>&#183; Injecting false correlations<br>&#183; Backdoor triggers</p><p><strong>8.2 Fine-Tuning Poisoning</strong></p><p>&#183; Subtle bias insertion<br>&#183; Conditional behaviors (&#8220;if X then misbehave&#8221;)</p><p><strong>8.3 Preference Model Poisoning</strong></p><p>&#183; Manipulating RLHF signals<br>&#183; Shaping unsafe alignment incentives</p><p><strong>8.4 Synthetic Data Feedback Loops</strong></p><p>&#183; Model trained on its own outputs<br>&#183; Error amplification</p><p><strong>8.5 Label Manipulation</strong></p><p>&#183; Corrupting supervised signals<br>&#183; Misclassification reinforcement</p><p><strong>9. Model Behavior &amp; Representation Attacks</strong></p><p><strong>9.1 Backdoor Triggers</strong></p><p>&#183; Rare tokens or phrases causing hidden behaviors</p><p><strong>9.2 Trojaned Models</strong></p><p>&#183; Pre-compromised open-source models<br>&#183; Malicious adapters or LoRA layers</p><p><strong>9.3 Model Collapse Attacks</strong></p><p>&#183; Degrading output quality intentionally<br>&#183; Over-homogenization</p><p><strong>9.4 Representation Inversion</strong></p><p>&#183; Extracting sensitive training data<br>&#183; Memorization exploitation.</p><p><strong>9.5 Weight-Space Manipulation</strong></p><p>&#183; Poisoned checkpoints<br>&#183; Malicious merges</p><p><strong>10. API-Level AI Attacks (Non-Traditional)</strong></p><p><strong>10.1 Prompt Leakage via APIs</strong></p><p>&#183; System prompts exposed through error messages<br>&#183; Debug endpoints</p><p><strong>10.2 Output Side-Channel Attacks</strong></p><p>&#183; Timing<br>&#183; Token counts<br>&#183; Cost signals</p><p><strong>10.3 Rate-Limit Shaping Attacks</strong></p><p>&#183; Forcing degraded reasoning<br>&#183; Truncation exploitation</p><p><strong>10.4 Schema Abuse</strong></p><p>&#183; Exploiting weak input/output validation<br>&#183; Overly permissive JSON schemas</p><p><strong>10.5 Model Switching Exploits</strong></p><p>&#183; Forcing fallback to weaker models</p><p><strong>11. Data Integrity &amp; Lifecycle Attacks</strong></p><p><strong>11.1 Data Drift Exploitation</strong></p><p>&#183; Slowly shifting inputs to degrade performance</p><p><strong>11.2 Feedback Poisoning</strong></p><p>&#183; Manipulating user ratings<br>&#183; Corrupting evaluation pipelines</p><p><strong>11.3 Logging Contamination</strong></p><p>&#183; Injecting instructions into logs reused for training</p><p><strong>11.4 Ground Truth Erosion</strong></p><p>&#183; Undermining reference datasets<br>&#183; Authority decay</p><p><strong>11.5 Evaluation Gaming</strong></p><p>&#183; Passing benchmarks while failing real-world safety</p><p><strong>12. Governance, Control &amp; Oversight Failures (AI-Native)</strong></p><p><strong>12.1 Alignment Drift</strong></p><p>&#183; Gradual deviation from intended behavior</p><p><strong>12.2 Policy Shadowing</strong></p><p>&#183; Hidden instructions overriding official policies</p><p><strong>12.3 Human-in-the-Loop Bypass</strong></p><p>&#183; Agent designs that avoid escalation</p><p><strong>12.4 Audit Evasion</strong></p><p>&#183; Non-reproducible outputs<br>&#183; Non-deterministic behavior masking issues</p><p><strong>12.5 Explainability Attacks</strong></p><p>&#183; Plausible but false rationales</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading HRExaminer! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why the Eightfold lawsuit matters and doesn't]]></title><description><![CDATA[by Heather Bussing]]></description><link>https://www.hrexaminer.com/p/why-the-eightfold-lawsuit-matters</link><guid isPermaLink="false">https://www.hrexaminer.com/p/why-the-eightfold-lawsuit-matters</guid><dc:creator><![CDATA[Heather Bussing]]></dc:creator><pubDate>Thu, 22 Jan 2026 21:50:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!DoN0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DoN0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DoN0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DoN0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DoN0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DoN0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DoN0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6833830,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/185461104?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DoN0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DoN0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DoN0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DoN0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d79cda-d4ca-44c7-9cc6-33649c94d6bf_3984x2656.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>I&#8217;ve been waiting for this. It was just a matter of time before some attorney wondered whether gathering lots of data on people and using it for important decisions like hiring violates the Fair Credit Reporting Act. </p><p>This week some job candidates sued Eightfold AI for collecting data about them without telling them, without allowing them to know what&#8217;s being collected, and without giving them an opportunity to correct any wrong information.</p><h4>What does the FCRA cover?</h4><p>The FCRA applies not just to traditional credit reports, but also to pretty much all kinds of data collection about a person for employment decisions because the data collection falls under the definition of &#8220;consumer report.&#8221; </p><p>A consumer report is:</p><blockquote><p>&#8220;any written, oral, or other communication of any information by a consumer reporting agency bearing on a consumer&#8217;s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer&#8217;s eligibility for</p><p>(A) credit or insurance to be used primarily for personal, family, or household purposes;</p><p>(B) employment purposes; or</p><p>(C) any other purpose authorized under section 604 [&#167; 1681b].&#8221;</p></blockquote><p>A consumer reporting agency is:</p><blockquote><p>&#8220;any person which, for monetary fees, dues, or on a cooperative nonprofit basis, regularly engages in whole or in part in the practice of assembling or evaluating consumer credit information or other information on consumers for the purpose of furnishing consumer reports to third parties, and which uses any means or facility of interstate commerce for the purpose of preparing or furnishing consumer reports.&#8221;</p></blockquote><p>Employment purposes is:</p><blockquote><p>&#8220;a report used for the purpose of evaluating a consumer for employment, promotion, reassignment or retention as an employee.&#8221;</p></blockquote><p>Basically, the FCRA applies any time somebody gathers data to be used in evaluating a candidate or employee for any employment decision and provides that data to an employer who pays them for it.</p><p>It requires notice that the data is being collected and provided to the employer and the person has to consent to the use of that information in the employment decision. </p><p>This seems big until you realize that there is absolutely nothing in these laws that says that employers can&#8217;t rely on whatever data they get to make decisions. Even if they decide based on incomplete or incorrect information, they don&#8217;t have to change the decision. They just have to give the person the opportunity to correct the information, which won&#8217;t matter at that point anyway because the person already didn&#8217;t get the job. And even if the person asks for correction, there&#8217;s no incentive for anyone to actually do anything because the decision is over and the employer, who has the obligation, doesn&#8217;t actually control the data.</p><h4>How does the CCPA apply here?</h4><p>Enter the CCPA, the California Consumer Protection Act, that also provides people with rights to know what data is being collected about them as well as how it is used and shared. The CCPA also provides people with rights to correct or have information about them deleted. But except in limited circumstances&#8212;primarily at the point the data is collected directly from someone&#8212;there is no obligation for the entities collecting the data to disclose that they have it. The obligations are primarily on the consumer to ask, even though they have no way to know if it&#8217;s worth asking. Then the burden is all on the consumer to try request correction or deletion.</p><p>From a candidate or consumer standpoint, these laws are mostly data privacy theater because they don&#8217;t prevent the bad things from happening. They just offer &#8220;rights&#8221; without remedies that actually do anything that matters.</p><h4>What are the Plaintiffs asking for in this lawsuit?</h4><p>The FCRA allows plaintiffs to collect damages for violations of the act. Statutory damages range from $100-$1000 per violation. To the extent that a job candidate can prove actual damages as a result of the violation, a plaintiff could also recover those along with potential punitive damages.</p><p>Similarly, the CCPA provides for statutory fines of up to $2500 per violation for unintentional violations and$7,500 per violation for intentional (knowing) violations. But consumers can only recover up to $750 per violation.</p><p>The problem is that proving causation, that you did not get a job as a result of an FCRA or a CCPA violation, is almost impossible. If only you had received notice and an opportunity to correct information after the decision was made, then surely you would have been hired in the first place? Yeah, no. Candidate lawsuits are difficult to prove because there are a million perfectly reasonable reasons a candidate may not be selected, including that there were a bunch of other candidates that were closer to what the employer was looking for.</p><p>So this is really about the class action and making the class as big as possible then piling on the statutory damages and attorneys&#8217; fees.</p><p>Like most class actions, it&#8217;s about making the problem known, trying to influence how tech companies and employers operate, and collecting attorneys&#8217; fees for the trouble.</p><p>But that&#8217;s also what civil litigation does. It moves money around after the bad things happen. The laws exist to try to prevent the bad things from happening by providing a deterrent for violating the requirements. But until there&#8217;s an actual lawsuit where someone may have to write a really big check, it&#8217;s all a theoretical assessment of the risk of getting caught.</p><p>For many tech startups, that&#8217;s a risk you deal with later after the product is launched and people are already using it. What could possibly go wrong?</p><h4>Risk, compliance and culture</h4><p>Risk and compliance are also culture issues. In some places, compliance is a given because the social contract and practical priorities are to reduce friction, make it easy to understand what is expected, and have reality conform to people&#8217;s expectations as much as possible. </p><p>This is why in Japan, the trains always run on time and compliance is generally baked into everything.</p><p>In the US, risk of getting caught and whether violating the law is more profitable than not is a business decision. After all, it just comes down to money, hassle, and luck. And there are plenty of people willing to roll the dice.</p><h4>What will happen?</h4><p>Some tech companies will see this lawsuit and decide that maybe the risk is bigger than they thought and start figuring out how to put the right stand-alone notice and consent forms into every job application ever. It&#8217;s not that difficult; the forms and processes already exist. </p><p>In this case, Eightfold will fight the class certification because that&#8217;s where all the money is. If they lose, they will settle. If they win, they will settle for a whole lot less. But since the fix is easy going forward, then the strategy is to avoid public trials, bad decisions, and even worse, bad precedent on appeal. This is a one-off that will hang around for a while and then disappear. There will be some others similar lawsuits riding on the coat tails in the short term. Still as a practical matter, this issue is going away because providing the notice and consent forms is not that big a deal.</p><p>For HR, the risk is bigger because once discovery opens and they have to produce documents related to every time they used a tech program in employment decisions, the costs, resources, and hassles are significant. Even though the potential employers are not being sued in the Eightfold case, they&#8217;re the ones with all the evidence. So they will have to hire staff and attorneys to deal with the subpoenas and records.</p><p>This is enough to make buyers cautious. And they&#8217;re already concerned about AI.</p><p>This is also a boon to legacy HR tech companies who have far more at stake if their products cause stress and liability to their customers. They tend to play it safe, bake compliance into everything they do, and not take on legal risk like it&#8217;s the Wild West. They are the adults in the room, playing the long game according to the rules. They will also likely be around tomorrow and a few years from now if you plan on actually using the products for a while&#8212;like maybe the whole contract term. (I&#8217;m a little fed up with companies that don&#8217;t really care about doing business and being companies. See also private equity.)</p><p>As for lawmakers, notice and an opportunity to maybe fix stuff after it&#8217;s too late is not enough. At the same time, making rules that give employers enough flexibility to run their companies while giving applicants and employees meaningful protections that are fair and have real consequences is difficult. But necessary. </p><p>We&#8217;re seeing lots of experimenting with notice and consent. We&#8217;re also seeing requirements for data and analysis and reporting, which I&#8217;m generally a fan of because it forces employers and tech companies to see the problems and actually do something to fix them.</p><p>In the meantime, we should all be more focused on transparency, trust, and integrity. Especially integrity.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading HRExaminer! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Learning by Mistake]]></title><description><![CDATA[by Heather Bussing]]></description><link>https://www.hrexaminer.com/p/learning-by-mistake</link><guid isPermaLink="false">https://www.hrexaminer.com/p/learning-by-mistake</guid><dc:creator><![CDATA[Heather Bussing]]></dc:creator><pubDate>Fri, 16 Jan 2026 22:37:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!G8xx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G8xx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!G8xx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 424w, https://substackcdn.com/image/fetch/$s_!G8xx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 848w, https://substackcdn.com/image/fetch/$s_!G8xx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!G8xx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!G8xx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png" width="679" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/50c40038-7520-46a9-8ace-023ac465b671_679x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:679,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:726028,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/184819505?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!G8xx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 424w, https://substackcdn.com/image/fetch/$s_!G8xx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 848w, https://substackcdn.com/image/fetch/$s_!G8xx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!G8xx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50c40038-7520-46a9-8ace-023ac465b671_679x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The hard way is how I&#8217;ve learned everything that&#8217;s important to me.</p><p>It&#8217;s okay to fall down, screw up, make mistakes, and figure it out by doing it wrong. Transformation is a process. It usually isn&#8217;t pretty.</p><p>Take learning to ski. There you are at the top of the mountain in your cute jacket and sunscreen lip gloss. Then you notice the snow, and cold, and wind, and how weird your feet feel in boot jail. It dawns on you those feet are attached to very long slippery boards. The only way to hot chocolate is to skid down a mountain. </p><p>What were you thinking?</p><p>Suddenly, you have a whole new appreciation for gravity and friction.</p><p>So you fall on your ass all the way down. Small children go zipping past, laughing as you hobble up the hill to collect your other ski.</p><p>When you get back to the lifts wondering if your tailbone is broken, your cheerful, warm, dry friends tell you to do it again. You wonder whether to kill yourself, or kill them first.</p><p>The bad news is, it never ends. You will always be learning something new, and screwing up while you do.</p><p>The good news is, if there is no end, then any place in the process is no better, and no worse, than anywhere else. It&#8217;s just an infinite number of points on a infinite line. So, it&#8217;s always okay to be exactly where you are&#8211;even when that place feels rotten. It&#8217;s just the place for this moment. And it will change.</p><p>No matter what, you will never be fixed, perfect, or even normal, because there&#8217;s no such thing.</p><p>As long as you can see a tiny way past your fear, you are doing the right thing. Even when you can&#8217;t see and just want to hide under the covers with a pile of dark-chocolate salted caramels, you are still doing the right thing. Rest is good. Chocolate is especially good.</p><p>Quit judging yourself about where you are&#8211;you&#8217;re there. You won&#8217;t be there long. And the more you beat yourself up and tell yourself it&#8217;s not okay to be there, the more stuck you will feel.</p><p>So your fears came true. Welp. You don&#8217;t have to be afraid of <em>that </em>anymore.</p><p>Start where you are&#8211;it&#8217;s not like you really have a choice. If it sucks&#8211; okay. So it sucks. What&#8217;s the next right thing?</p><p>Try not to act from a place of fear or anger. That&#8217;s not always possible. Make amends when you do. Take responsibility for what is yours. Don&#8217;t take responsibility for what isn&#8217;t yours. And if you&#8217;re confused about that, confused is a great place to start.</p><p>You get do-overs every minute if you need them.</p><p>Some of my &#8220;best&#8221; achievements and events turned out to be huge mistakes. (Don&#8217;t have children with your starter husband.) And some of the most awful things I&#8217;ve gone through have been the most precious gifts. (Sometimes you marry the wrong guy to get the right kids.) Sometimes the best and worst are the same thing.</p><p>So when things are going sideways and nothing feels right, you&#8217;re just in the process of learning something new. Relax, and keep going. And apply chocolate as needed</p><p>.</p>]]></content:encoded></item><item><title><![CDATA[Fear and Integrity. Leadership Requires Courage]]></title><description><![CDATA[Leadership Requires Courage. Navigating the Line Between Fear and Integrity.]]></description><link>https://www.hrexaminer.com/p/hrex-v106-jon-duffy-on-leadership</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-v106-jon-duffy-on-leadership</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Fri, 16 Jan 2026 13:56:15 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/184582061/3b8ef3b370bc41d4add5b582d27c0e0e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Jon Duffy is the real deal.</p><p>Thirty years in the Navy. Commanded destroyers and cruisers. Ran a squadron of nine ships and 3,000 sailors out of Japan. Then landed on the National Security Council as Director of Defense Policy and Strategy during the Obama years. The guy&#8217;s been in the Situation Room. He&#8217;s shaped Pacific Fleet strategy.</p><p>So what does someone like that do next? He goes back to school. Doctoral work in leadership studies. Because the questions that kept him up at night on the bridge of a destroyer are the same ones haunting him now.</p><p>What is courage, really? Not the movie version. The real thing.</p><p>How does fear warp the decisions leaders make&#8212;even the good ones?</p><p>And here&#8217;s the hard one: Why do smart, successful people throw their integrity overboard precisely when it matters most?</p><p>Jon&#8217;s seen leadership up close from places most of us will never go. He&#8217;s got things to say. Worth listening to.<br><br><strong>Show Notes</strong><br><strong>Keywords</strong><br>leadership, military, Navy, executive coaching, strategy, moral leadership, relational leadership, trust, coherence, uncertainty</p><p><strong>Summary<br></strong>In this episode of the HR Examiner podcast, host John Sumser speaks with John Duffy, a graduate student and former Navy officer, about his extensive leadership experience in the military and his transition to academic and corporate leadership roles. Duffy shares insights on the moral and relational aspects of leadership, emphasizing the importance of trust and coherence over control, especially in high-pressure situations.</p><p><strong>Takeaways</strong></p><ul><li><p>Leadership is a very moral and relational act.</p></li><li><p>It&#8217;s about coherence and trust.</p></li><li><p>Courage to stay honest under pressure is essential.</p></li><li><p>Leadership is something you practice and immerse yourself in.</p></li><li><p>It&#8217;s less about control than it is about trust.</p></li><li><p>Duffy spent 30 years in the US Navy.</p></li><li><p>He commanded a squadron of nine destroyers.</p></li><li><p>Duffy has experience in corporate America and consulting.</p></li><li><p>He is currently pursuing a doctorate in leadership studies.</p></li><li><p>Duffy is exploring what&#8217;s next after his military career.</p></li></ul><p></p><p>Transcript</p><p></p>]]></content:encoded></item><item><title><![CDATA[Leadership, Fear, and Courage]]></title><description><![CDATA[Managing is not leading.]]></description><link>https://www.hrexaminer.com/p/leadership-fear-and-courage</link><guid isPermaLink="false">https://www.hrexaminer.com/p/leadership-fear-and-courage</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Wed, 14 Jan 2026 13:43:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WMZI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WMZI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WMZI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WMZI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WMZI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WMZI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WMZI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4041567,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/183486493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WMZI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WMZI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WMZI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WMZI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49b7fab2-c3a2-4aa0-90a0-23c617e7c2a3_6000x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Most of what I read about leadership might be best described as &#8216;how to wield power.&#8217; The underlying question is &#8216;how do I get people to do what I want them to do?&#8217; The jumble of buzzwords sounds like &#8216;working to inspire and guide people toward achieving goals through effective strategy, delegation, and fostering a positive culture.&#8217; Or, &#8220;How to get people to do what I want them to do.&#8221;</p><p>A scan through the internet yields lists of &#8216;leadership skills and traits. Here&#8217;s a sample:</p><p>&#183; Accountability</p><p>&#183; Adaptability</p><p>&#183; Confidence</p><p>&#183; Creativity</p><p>&#183; Empathy</p><p>&#183; Focus</p><p>&#183; Positivity</p><p>&#183; Risk Taking</p><p>&#183; Stability</p><p>&#183; Team-building</p><p>That misses the boat. Leadership is about the paradoxical task of caring for followers while navigating the chaos. I like to describe it this way:</p><p>&#8216;Imagine you are driving across the Golden Gate Bridge on the foggiest of foggy San Francisco mornings. It&#8217;s so thick that you can only see the reflection of your headlights and the size and shape of droplets in the mist. While the people behind you get to follow your taillights, you feel your way through knowing that their safety depends on you. From behind, leadership looks easy..you just follow the taillights. From the leadership position, the job is to translate situational uncertainty and danger into a path that can be followed.&#8217;</p><p>In other words, the central energy in leadership is courage.</p><p>The central chore of a leader is to navigate the line between fear and integrity. It&#8217;s a goldilocks problem. If you are lucky as a leader, there is a lot of room to maneuver. When the risks are small and the reward is small, navigating decision making is not that hard. Follow the rules, hope for the best, and expect the consequences of mistakes to be minimal.</p><p>This is the world of managers. The task boils down to getting people to do what you want them to do when the risks are low and the reward is small. Passing strategic direction straight through without question. Hoping to get most of your bonus. Helping the team to get most of theirs. </p><p>It&#8217;s driving across the bridge on a sunny day.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p><p>Leadership gets way more complicated when the stakes are high and the risks are severe. You can best see real leadership when the leader&#8217;s personal risks include loss of life, loss of job, loss of career, major financial, loss of status, serious physical injury. You know, the scary stuff. </p><p>Low risk, low reward environments are a manager&#8217;s paradise. Bullying is enforced high risk with low reward. Investment nirvana is a low risk high reward environment. Heroism is what is called for when the risks are high and the stakes are equally high or higher.</p><p>Part of what we are seeing on the national landscape is an unsurprisingly human response to leadership by bullying. Job hugging (hanging on to the job as hard as possible) appears to be motivating our political and business leadership. A relentless focus on EBITDA (gross profit) regardless of human consequence has taken root. It&#8217;s as if the private equity industry took over the society.</p><p>That&#8217;s where courage comes in. Bullying is about following orders blindly. Leadership is about seeing the consequences.</p><p>At a managerial level, the consequence of job hugging is a numbing of effective judgment. It&#8217;s a slippery slope. One unnecessary compromise leads to the next. The important parts of caring for the drivers following your taillights vanish in a flurry of &#8216;I&#8217;m  the only one that matters&#8217; rationalization </p><p>Then, it&#8217;s a short distance to getting comfortable with layoffs two weeks before the holidays. When fear overcomes integrity, bad behavior emerges. Darwinian &#8216;me or them&#8217; decision making replaces the primary obligation of leadership: providing taillights.</p><p>Real leadership uses power to improve outcomes. It would be nice if it were as easy as that sounds. When the best outcome falls outside of the dictated window is when real leadership emerges. It involves saying no (and encouraging followers to say no) when an order is immoral, illegal, or incorrect.</p><p>There is much more to say about the struggle between fear and integrity and how that is the substance of great leadership.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/leadership-fear-and-courage?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/leadership-fear-and-courage?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[All We Get is Faster Horses]]></title><description><![CDATA[Reimagining is hard]]></description><link>https://www.hrexaminer.com/p/all-we-get-is-faster-horses</link><guid isPermaLink="false">https://www.hrexaminer.com/p/all-we-get-is-faster-horses</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Sat, 10 Jan 2026 13:53:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4sER!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4sER!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4sER!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 424w, https://substackcdn.com/image/fetch/$s_!4sER!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 848w, https://substackcdn.com/image/fetch/$s_!4sER!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 1272w, https://substackcdn.com/image/fetch/$s_!4sER!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4sER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png" width="1456" height="978" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:978,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1984902,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/181285579?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4sER!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 424w, https://substackcdn.com/image/fetch/$s_!4sER!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 848w, https://substackcdn.com/image/fetch/$s_!4sER!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 1272w, https://substackcdn.com/image/fetch/$s_!4sER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F285a62a5-8f14-4715-8c43-685b99848eb6_2094x1406.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The hard part about technical disruption like we are facing is reimagining things. How do you imagine the unimaginable?</p><p><strong>&#8220;If I had asked people what they wanted, they would have said faster horses.&#8221;</strong></p><p>There is no evidence that Henry Ford actually said this verbatim. But the folk legend is treated as fact because it so accurately encompasses his philosophy. Said another way, &#8220;<strong>People ask for better versions of what they know. Leaders build what they do not yet know they need.&#8221;</strong></p><p>Here are the underlying ideas:</p><ul><li><p><strong>Customers imagine improvements to what they already know</strong></p><ul><li><p>People frame needs in the language of existing solutions.</p></li></ul></li><li><p><strong>Visionary innovators identify the underlying needs</strong></p><ul><li><p>Ford recognized the real need was <strong>faster, cheaper, more reliable transportation</strong> &#8212; not better horses.</p></li></ul></li><li><p><strong>Breakthroughs come from redefining the problem</strong></p><ul><li><p>The automobile solved the transportation problem far beyond incremental improvement.</p></li></ul></li><li><p><strong>Market research alone is insufficient</strong></p><ul><li><p>Customers cannot imagine solutions that do not yet exist.</p></li></ul></li></ul><p>Reimagining is hard. Our educational institutions are focused on the delivery of the right answer. We&#8217;ve created a system that rewards regurgitation rather than insight. We teach problem solving rather than problem discovery. Unfortunately, the next thing is never a faster version of the current thing.</p><p>It is the rare teacher who seeks to be challenged. Modern transformation failures are rarely technical. They are failures of: imagination, framing, and system redesign.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/subscribe?"><span>Subscribe now</span></a></p><p>My favorite example is the keyboard. The image at the top of the article is a classic attempt to build a faster horse. It&#8217;s <a href="https://www.yankodesign.com/2025/10/03/gboard-dial-version-spins-keyboard-design-in-a-playful-new-direction/">Google Japan&#8217;s GBoard Dial Version</a>. The basic flaw of a keyboard as an input device is that it&#8217;s a keyboard. You can&#8217;t solve the fundamental problem by redesigning the 19th Century technology that causes it.</p><p>Over 150 years old, the keyboard forces thought to flow at the speed of fingers. (In my case, that&#8217;s very slow.)  Designed as a typesetting device (where it was a new way of doing things), this primary tool hobbles our ability to fully utilize the computing capacity we already have. While there are a few fledgling attempts to close the mind computer gap (<a href="https://neuralink.com/">Neuralink</a>, for example), none show real progress towards the elimination of this fundamental bottleneck.</p><p>As a result, we are stuck with <a href="https://en.wikipedia.org/wiki/The_Mother_of_All_Demos">50 year old metaphors</a> for our relationship with our machines. In case you haven&#8217;t noticed, your monitor is not a desktop at all. The idea of the desktop is a metaphor designed to make relating to a computer easier. The struggle to break free of that metaphor is part of why AI seems simultaneously foreign and natural.</p><p>The unreasonable demand that AI should be instantaneously profitable and widely adopted is the actual bubble. Technology revolutions move slowly until they move quickly. In the keyboard&#8217;s case, it&#8217;s very slowly.</p><p><strong>Listen for ideas that seem outlandish</strong>. The next thing is going to sound simplistic, be of low quality intially, and take meaningful amounts of time to be digested. Right now, we are getting faster horses. We are using new tech to make dumb things into faster dumb things.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/all-we-get-is-faster-horses?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/all-we-get-is-faster-horses?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><p></p><p></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Is Very Easy to Trust]]></title><description><![CDATA[And that's a problem]]></description><link>https://www.hrexaminer.com/p/ai-is-very-easy-to-trust</link><guid isPermaLink="false">https://www.hrexaminer.com/p/ai-is-very-easy-to-trust</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Wed, 07 Jan 2026 15:34:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Tmxe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tmxe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tmxe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Tmxe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Tmxe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Tmxe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Tmxe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg" width="468" height="624" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:624,&quot;width&quot;:468,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80490,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hrexaminer.com/i/183174242?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Tmxe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Tmxe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Tmxe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Tmxe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cb93baa-c24f-4471-a4ba-fce76567fbe2_468x624.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>The HRExaminer email and workspace accounts are managed by Google. These days, Google is occasionally adding summaries to the top of email threads. Yesterday, I got a piece of email at my HRExaminer address. It let me know that my frozen dog food order would be delivered by the 19th.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>At least I thought that was what it said.</p><p>We have a regular shipment scheduled every four weeks. Frozen dog food can take up a lot of freezer space. So, I have worked diligently to figure out the right timing and quantity to keep Carlos fed while minimizing freezer space. I have had to stop using some providers because their supply chain overloaded my freezer no matter what I did.</p><p>The email said, I thought, that the shipment would be two weeks later than expected.</p><p>There&#8217;s an emotional state that involves a nagging level of concern. Not enough to do something right away and not enough to look deeply. It&#8217;s the state in which you try to remember to put something on the to do list or the needs further research list. Instead, it lives like a little nagging creature on your shoulder. </p><p>Ultimately, I did what you do. I waded through the inconvenience of remembering the company domain name, getting my user name so that I can reset my password (again), passing through the various security/ identity verification steps, and finally looking at my account. </p><p>I don&#8217;t know about you but when I finally solve a problem like timing in my personal supply chain, I put the whole thing on autopilot and more or less forget it. That makes the digging all the more painful.</p><p>I took a look at my order history and, much to my delight, discovered that the shipment was on time and would be delivered as expected. What a relief. No plan &#8216;B&#8217;s to create, no grumpy letters to write. Just a question about how Google could have been so wrong.</p><p>I pulled the email back up. The google provided summary, bold text on a gray background. said &#8216;Your shipment should arrive by Jun 19th.&#8217; Now that I looked a little more closely, it became clear that I scanned the note and read Jun 19th as Jan 19th. </p><p>I don&#8217;t usually get the opportunity to confuse June and January. The email, dated late on Dec 31st, was summarized as a notification of a shipment in early summer. The mistake generated real bandwidth loss and consumption.</p><p>I suppose that I should have double checked the email with a more critical eye. No one is more concerned that I am about the tendency for contemporary AI to be wrong. I should have but I just don&#8217;t have the energy to watch my machine for every one of its mistakes.</p><p>The material consequence of this AI failure was that I used ChatGPT to figure out how to turn the functionality off. Then I turned it off. I do not like having my time wasted by a machine.</p><p>What the leading AI companies don&#8217;t seem to understand is that they are eroding trust in our fundamental communications infrastructure. (It&#8217;s already suffered an enormous amount of degradation.)  When I go to the effort of figuring out how to turn off an uninvited slug of functionality, I am not likely to turn it back on.</p><p>You might well be thinking, &#8216;that&#8217;s an awfully trivial complaint&#8217; or &#8216;John seems to be over-investing in solving trite problems&#8217;.<br><br>Trust is often freely given at the start of things. No matter how loud and frequent the warnings, people want to rely on their machines. It is dreadfully difficult to get things done when you don&#8217;t trust your tools. Trust breaks easily and then has to be rebuilt through an arduous process of getting little things right. In the end, it&#8217;s the little things that matter.</p><p>I am already starting to hear stories about people getting fed up with the inconsistency, instability and error rates of the current crop of intelligent tools. (They are more sensitive to having their time wasted than I am.) It may be that we are nearing an inflection point where the technology no longer gets the benefit of the doubt.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/ai-is-very-easy-to-trust?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading HRExaminer!                Please share the article</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hrexaminer.com/p/ai-is-very-easy-to-trust?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hrexaminer.com/p/ai-is-very-easy-to-trust?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p><br></p><p></p>]]></content:encoded></item><item><title><![CDATA[HREX v1.05 ADP's Naomi Lariviere]]></title><description><![CDATA[The integration of AI ethics, data governance, and responsible product development]]></description><link>https://www.hrexaminer.com/p/hrex-v105-naomi-lariviere</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-v105-naomi-lariviere</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Mon, 05 Jan 2026 17:09:54 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/183198541/8ea855de6ad68d91355a73dc8cd4811d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Naomi Lariviere is a senior technology and product leader at ADP specializing in the integration of AI ethics, data governance, and responsible product development. She leads initiatives that embed telemetry, data quality, and ethical guardrails directly into the product lifecycle. At ADP, she plays a central role in shaping how advanced analytics and AI are deployed safely and transparently for over one million clients, with a strong focus on trust, compliance, and human-centered decision support.</p><p>In this interview, Naomi explains embedding data quality, telemetry, and ethics into every stage of product development. She emphasizes that high-quality, contextual data is foundational before any AI or LLM deployment and that missing data is treated as a signal rather than a flaw. ADP uses deterministic, rules-based AI for high-risk domains such as payroll while avoiding probabilistic AI where accuracy must be absolute. Clients retain decision authority, supported by transparency into how outputs are generated. Lariviere stresses careful AI use in sensitive areas like performance reviews and highlights bias testing, independent audits, experimentation, and human curiosity as essential safeguards for responsible AI adoption.</p><h3><strong>I. Core Philosophy</strong></h3><ul><li><p>Data quality is the foundation of AI effectiveness</p></li><li><p>Telemetry and continuous feedback guide product evolution</p></li><li><p>AI must preserve human agency and transparency</p></li></ul><h3><strong>II. AI Deployment Principles at ADP</strong></h3><ul><li><p>Deterministic vs. probabilistic AI pathways</p></li><li><p>High-risk areas (e.g., payroll) require deterministic certainty</p></li><li><p>LLMs used only after data fitness validation</p></li></ul><h3><strong>III. Governance &amp; Trust</strong></h3><ul><li><p>Control towers / orchestration layers manage AI behavior</p></li><li><p>Clients always review, modify, and approve AI outputs</p></li><li><p>Trust built through explainability and gradual automation</p></li></ul><h3><strong>IV. Sensitive Use Cases &amp; Regulation</strong></h3><ul><li><p>Cautious stance on performance reviews and career decisions</p></li><li><p>Heavy emphasis on compliance, regulation, and experimentation boundaries</p></li></ul><h3><strong>V. Bias &amp; Responsible AI</strong></h3><ul><li><p>Bias mitigation through diverse teams, data audits, and third-party testing</p></li><li><p>Responsible AI as ongoing discipline, not a one-time fix</p></li></ul><h3><strong>VI. Innovation &amp; Problem Reframing</strong></h3><ul><li><p>Organizations must relearn how to define problems</p></li><li><p>Curiosity, experimentation, and human creativity remain irreplaceable</p></li></ul>]]></content:encoded></item><item><title><![CDATA[HREX Cast v1.04 Matt Charney]]></title><description><![CDATA[The sharpest mind in HRTech/WorkTech]]></description><link>https://www.hrexaminer.com/p/hrex-cast-v104-matt-charney</link><guid isPermaLink="false">https://www.hrexaminer.com/p/hrex-cast-v104-matt-charney</guid><dc:creator><![CDATA[John Sumser]]></dc:creator><pubDate>Fri, 02 Jan 2026 01:20:05 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/183187481/4405a6064d05e522408b3d8294db57ab.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Matt Charney is a leading independent voice in HR technology and talent strategy, known for combining deep domain expertise with a sharp, candid writing style. With experience spanning recruiting, editorial leadership, industry analysis, and advisory work for startups, private equity, and acquiring firms, Charney brings a rare full-spectrum view of the HR tech ecosystem. His work consistently cuts through hype&#8212;especially around AI&#8212;to focus on real business outcomes, market structure, and human impact.</p><p>n this wide-ranging conversation, Matt Charney, John Sumser, and Heather Bussing explore the realities behind HR technology, AI hype, recruiting economics, and the future of work. Charney emphasizes disciplined research, product differentiation, and outcome-driven evaluation over marketing noise. The group critiques the overuse of AI as branding, the misalignment between enterprise-focused HR tech and the needs of smaller employers, and the growing tendency to treat workers as interchangeable components. They examine economic risks tied to AI investment bubbles, the human cost of automation, and the urgent need for creativity, ethics, and governance as technology accelerates. Together, the discussion offers a grounded, often contrarian view of the forces reshaping hiring and work.</p><p>Below is the revised version in the exact structure requested.</p><div><hr></div><h2><strong>Full Structured Summary</strong></h2><h3><strong>1. Writing, Voice &amp; Credibility</strong></h3><p>Charney argues that voice, tone, and style drive engagement, especially in the era of generative AI. However, he rejects &#8220;hot takes,&#8221; insisting instead on rigorous research and third-party validation. His writing process begins with a hypothesis, tested through data, then shaped into narrative&#8212;adding voice only after the facts are secure.</p><h3><strong>2. Professional Identity &amp; Current Roles</strong></h3><p>Charney primarily identifies as a writer, supported by analytical and editorial work. He currently serves as principal analyst for industry and markets (with focus on startups, VC, M&amp;A, and ecosystems), holds editorial leadership roles at ERE Media and Media Bistro, and actively develops new industry voices through paid contributor programs. His scope intentionally extends beyond traditional enterprise HR toward emerging innovation.</p><h3><strong>3. How He Evaluates HR &amp; AI Products</strong></h3><p>Charney applies three core product questions:</p><ol><li><p>Does it make life tangibly easier for users?</p></li><li><p>How does the company actually make money?</p></li><li><p>What is the single most differentiating feature?</p><p>He criticizes HR tech&#8217;s chronic inability to clearly articulate value and warns that AI has become marketing camouflage rather than meaningful differentiation.</p></li></ol><h3><strong>4. AI, Recruiting &amp; the Human Cost</strong></h3><p>Charney predicts recruiting budgets will continue shifting from headcount to AI tools, increasing pressure to demonstrate ROI and regulatory compliance. He argues that recruiting effectiveness still hinges on process, alignment with business goals, and human relationships&#8212;AI mainly enables scale, not better hiring.</p><h3><strong>5. Small Business vs. Enterprise Reality</strong></h3><p>The group highlights a fundamental disconnect: most recruiting challenges occur outside large enterprises, yet HR technology is built almost exclusively for enterprise buyers. This misalignment leaves the majority of employers underserved.</p><h3><strong>6. Ethics, Automation &amp; the Future of Work</strong></h3><p>They explore the danger of &#8220;widgetizing&#8221; people&#8212;reducing workers to process units&#8212;at the expense of creativity, judgment, and meaning. While technology could ultimately free humans from work, the outcome depends on governance, incentives, and ethical leadership.</p><h3><strong>7. Economic Outlook</strong></h3><p>Charney and Sumser foresee near-term turbulence driven by AI hype, fragile capital structures, and recession indicators, followed by long-term transformation once speculative bubbles reset.</p><div><hr></div><p></p>]]></content:encoded></item></channel></rss>