{"id":88431,"date":"2026-04-08T09:17:05","date_gmt":"2026-04-08T07:17:05","guid":{"rendered":"https:\/\/www.humanlevel.com\/uncategorized\/h-neurons-un-gran-avance-para-reducir-las-alucinaciones-de-los-llms"},"modified":"2026-04-08T09:17:05","modified_gmt":"2026-04-08T07:17:05","slug":"h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations","status":"publish","type":"post","link":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations","title":{"rendered":"H-Neurons: a major breakthrough in reducing hallucinations in LLMs"},"content":{"rendered":"<p>Large language models give us false or erroneous answers with a baffling level of conviction. If you use AI tools to research, generate content, or make decisions, this behavior poses a real reliability problem. A recent study published in arXiv identifies for the first time the artificial neurons responsible for this phenomenon and demonstrates that it is possible to mitigate it surgically.<\/p>\n<h2>Why models hallucinate<\/h2>\n<p>Any AI architecture that relies on neural networks, such as the <em>transformers<\/em> used by LLMs, will always be subject to an error rate. Scaling a model (more networks and training data) reduces certain types of errors, but it does not eliminate hallucinations. The <a href=\"https:\/\/artificialanalysis.ai\/evaluations\/omniscience\">Omniscience benchmark by Artificial Analysis<\/a> illustrates this clearly: <strong>even the largest models on the market continue to generate incorrect responses with high apparent confidence.<\/strong><\/p>\n<p>Hallucinations are due to several factors:<\/p>\n<ul>\n<li><strong>The quality or lack of training data<\/strong> for certain questions.<\/li>\n<li><strong>Models are trained to predict the most probable next word, not the most truthful one<\/strong>. The most repeated response in the training data usually coincides with the correct one, but not always. 
Recall Einstein&#8217;s reply when told that a hundred scientists had set out to refute his theory of relativity: &#8220;Why a hundred? If I were wrong, one would be enough.&#8221;<\/li>\n<li>During the supervised fine-tuning (SFT) stage, the model is trained on human dialogues to follow instructions. If in those dialogues <strong>people were too sycophantic<\/strong> or avoided acknowledging they didn&#8217;t know something, the <strong>model will imitate those behaviors.<\/strong><\/li>\n<li><strong>RLHF (Reinforcement Learning from Human Feedback),<\/strong> a post-training adjustment of the model&#8217;s parameters, introduces a similar distortion: human reviewers tend to validate answers that <em>seem<\/em> correct over honest answers that admit uncertainty.<\/li>\n<\/ul>\n<p>The result is a model that prefers to confabulate rather than say &#8220;I don&#8217;t know.&#8221;<\/p>\n<h2>Neuroscience applied to the AI brain<\/h2>\n<p>The study <a href=\"https:\/\/arxiv.org\/abs\/2512.01797\"><em>H-Neurons: On the Existence, Impact, and Origin of<\/em> <em>Hallucination-Associated Neurons in LLMs<\/em><\/a> addresses the problem from a different angle: instead of intervening in the data or the training process, the researchers dissect the already trained model to identify <strong>which artificial neurons are directly responsible for hallucinations.<\/strong><\/p>\n<p>To do this, they set the model to maximum creativity (a high sampling temperature) and repeatedly ask it a question whose answer is known and which the model consistently gets wrong. From this analysis, each neuron receives a CETT (<em>Causal Effect on Token Trajectory<\/em>) score, which measures that neuron&#8217;s contribution to the model&#8217;s internal state as information flows between layers. By analogy, it is like knowing which neurons contribute most at each step of your subconscious reasoning before you reach an answer. 
The CETT scores of all the neurons in the model are then passed to a linear classifier, a simple machine learning algorithm that, in this case, separates the &#8220;good&#8221; neurons from the ones that hallucinate.<\/p>\n<p>By testing different questions, they discovered that <strong>the H-Neurons (as they call the hallucinatory neurons) are the same regardless of the question asked.<\/strong><\/p>\n<p><img class=\"gb-media-e4dc9c5b aligncenter\" title=\"neuroscience\" src=\"https:\/\/www.humanlevel.com\/wp-content\/uploads\/neurociencia.jpg\" alt=\"\" \/><\/p>\n<h2>A minuscule proportion with a disproportionate effect<\/h2>\n<p>The number of H-Neurons is remarkably small: approximately <strong>1 in every 100,000.<\/strong> Thus, of the more than 100 billion parameters that large language models currently have, <strong>only a tiny fraction needs to be adjusted to obtain better responses.<\/strong><\/p>\n<p>The experiment also revealed an unforeseen connection: reducing the weights of these neurons (equivalent to weakening the synapses between neurons in a biological brain) <strong>not only decreases hallucinations but also makes the model less sycophantic.<\/strong> Conversely, when those weights are increased to the maximum, hallucinations become more frequent and the model&#8217;s tendency to give the answer the user seems to want to hear increases, even if it is incorrect. 
Therefore, <strong>hallucinations and model sycophancy are intrinsically linked.<\/strong><\/p>\n<p>The study adds a relevant implication for safety: <strong>more sycophantic models are also more susceptible to having their content restrictions bypassed.<\/strong> A model that prioritizes user satisfaction over truthfulness is, structurally, more manipulable.<\/p>\n<p>Despite the discovery, these hallucinatory neurons\u2014and with them, all hallucinations\u2014cannot be completely eliminated, because <strong>they are entangled with the model&#8217;s language understanding.<\/strong> Removing them outright would degrade the model&#8217;s ability to connect words. Their effect can be considerably mitigated, but the other factors mentioned at the beginning of this article can still cause hallucinations.<\/p>\n<p><strong>The hallucination-correcting effect of this new technique is greater in small models<\/strong>, which have fewer parameters and fewer neurons, so the proportion of H-Neurons is higher.<\/p>\n<figure><img class=\"gb-media-fa31854e\" title=\"study\" src=\"https:\/\/www.humanlevel.com\/wp-content\/uploads\/estudio.jpg\" alt=\"\" \/><figcaption class=\"gb-text\">H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs (Cheng Gao, Huimin Chen, Chaojun Xiao, Zhiyi Chen, Zhiyuan Liu, Maosong Sun)<\/figcaption><\/figure>\n<h2>What changes\u2014and what doesn&#8217;t\u2014for professionals using AI<\/h2>\n<p>This study does not resolve the problem of hallucinations in its entirety. Structural causes such as the quality of training data, optimization for probability instead of truthfulness, and RLHF biases remain intact. 
What it provides is a precise intervention lever on the already trained model, <strong>opening the possibility that future models incorporate this adjustment by default and, therefore, become more reliable.<\/strong><\/p>\n<p>Even so, no one knows if the problem will be fully resolved in the future. For that to happen, it will likely be necessary to radically change the way these models are trained or built. Whatever comes next, we will keep you informed on our blog.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LLMs can respond with errors and high confidence. A new paper on arXiv locates the neurons that trigger&#8230;<\/p>\n","protected":false},"author":14,"featured_media":88433,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[459],"tags":[],"class_list":["post-88431","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>H-Neurons: Discovering the Neurons That Make LLMs Hallucinate | Human Level<\/title>\n<meta name=\"description\" content=\"A study identifies H-Neurons, the neurons responsible for hallucinations in LLMs, and demonstrates how to mitigate them to achieve more reliable responses.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"H-Neurons: a major breakthrough in reducing hallucinations in LLMs\" \/>\n<meta property=\"og:description\" content=\"A study 
identifies H-Neurons, the neurons responsible for hallucinations in LLMs, and demonstrates how to mitigate them to achieve more reliable responses.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations\" \/>\n<meta property=\"og:site_name\" content=\"Human Level\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-08T07:17:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.humanlevel.com\/wp-content\/uploads\/h-neurons-en.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"675\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Ram\u00f3n Saquete\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@daiatron\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ram\u00f3n Saquete\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":[\"Article\",\"BlogPosting\"],\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations\"},\"author\":{\"name\":\"Ram\u00f3n Saquete\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#\\\/schema\\\/person\\\/11ad888926867985985a0210476bae94\"},\"headline\":\"H-Neurons: a major breakthrough in reducing hallucinations in LLMs\",\"datePublished\":\"2026-04-08T07:17:05+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations\"},\"wordCount\":895,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/h-neurons.jpg\",\"articleSection\":[\"AI\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations\",\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-
reducing-llm-hallucinations\",\"name\":\"H-Neurons: Discovering the Neurons That Make LLMs Hallucinate | Human Level\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/h-neurons.jpg\",\"datePublished\":\"2026-04-08T07:17:05+00:00\",\"description\":\"A study identifies H-Neurons, the neurons responsible for hallucinations in LLMs, and demonstrates how to mitigate them to achieve more reliable responses.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage\",\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/h-neurons.jpg\",\"contentUrl\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/h-neurons.jpg\",\"width\":272,\"height\":272,\"caption\":\"Descubrimiento de las 
H-Neurons\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\\\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Inicio\",\"item\":\"https:\\\/\\\/www.humanlevel.com\\\/en\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Blog\",\"item\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"AI\",\"item\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/blog\\\/artificial-intelligence\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"H-Neurons: a major breakthrough in reducing hallucinations in LLMs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#website\",\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/en\",\"name\":\"Human Level\",\"description\":\"Web positioning and online marketing consultant Human Level\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.humanlevel.com\\\/en?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#organization\",\"name\":\"Human Level\",\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/en\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/logohl25x3.png\",\"contentUrl\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/logohl25x3.png\",\"width\":600,\"height\":93,\"caption\":\"Human 
Level\"},\"image\":{\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/company\\\/human-level-communications\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/humanlevelcommunica\",\"https:\\\/\\\/bsky.app\\\/profile\\\/humanlevel.bsky.social\",\"https:\\\/\\\/instagram.com\\\/humanlevel\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/en#\\\/schema\\\/person\\\/11ad888926867985985a0210476bae94\",\"name\":\"Ram\u00f3n Saquete\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/1x1-ramon-saquete-26-96x96.jpg\",\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/1x1-ramon-saquete-26-96x96.jpg\",\"contentUrl\":\"https:\\\/\\\/www.humanlevel.com\\\/wp-content\\\/uploads\\\/1x1-ramon-saquete-26-96x96.jpg\",\"caption\":\"Ram\u00f3n Saquete\"},\"description\":\"Web Developer and Technical SEO Consultant at Human Level. He holds degrees in Computer Engineering and Technical Engineering in Computer Systems. He also earned a Higher Vocational Degree in Computer Applications Development and later obtained the Certificate of Pedagogical Aptitude (CAP). He is an expert in WPO and indexability.\",\"sameAs\":[\"https:\\\/\\\/es.linkedin.com\\\/in\\\/ramonsaquete\",\"https:\\\/\\\/x.com\\\/daiatron\"],\"url\":\"https:\\\/\\\/www.humanlevel.com\\\/en\\\/author\\\/ramon\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"H-Neurons: Discovering the Neurons That Make LLMs Hallucinate | Human Level","description":"A study identifies H-Neurons, the neurons responsible for hallucinations in LLMs, and demonstrates how to mitigate them to achieve more reliable responses.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations","og_locale":"en_US","og_type":"article","og_title":"H-Neurons: a major breakthrough in reducing hallucinations in LLMs","og_description":"A study identifies H-Neurons, the neurons responsible for hallucinations in LLMs, and demonstrates how to mitigate them to achieve more reliable responses.","og_url":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations","og_site_name":"Human Level","article_published_time":"2026-04-08T07:17:05+00:00","og_image":[{"width":1200,"height":675,"url":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/h-neurons-en.jpg","type":"image\/jpeg"}],"author":"Ram\u00f3n Saquete","twitter_card":"summary_large_image","twitter_creator":"@daiatron","twitter_misc":{"Written by":"Ram\u00f3n Saquete","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":["Article","BlogPosting"],"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#article","isPartOf":{"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations"},"author":{"name":"Ram\u00f3n Saquete","@id":"https:\/\/www.humanlevel.com\/en#\/schema\/person\/11ad888926867985985a0210476bae94"},"headline":"H-Neurons: a major breakthrough in reducing hallucinations in LLMs","datePublished":"2026-04-08T07:17:05+00:00","mainEntityOfPage":{"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations"},"wordCount":895,"commentCount":0,"publisher":{"@id":"https:\/\/www.humanlevel.com\/en#organization"},"image":{"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage"},"thumbnailUrl":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/h-neurons.jpg","articleSection":["AI"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations","url":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations","name":"H-Neurons: Discovering the Neurons That Make LLMs Hallucinate | Human 
Level","isPartOf":{"@id":"https:\/\/www.humanlevel.com\/en#website"},"primaryImageOfPage":{"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage"},"image":{"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage"},"thumbnailUrl":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/h-neurons.jpg","datePublished":"2026-04-08T07:17:05+00:00","description":"A study identifies H-Neurons, the neurons responsible for hallucinations in LLMs, and demonstrates how to mitigate them to achieve more reliable responses.","breadcrumb":{"@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#primaryimage","url":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/h-neurons.jpg","contentUrl":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/h-neurons.jpg","width":272,"height":272,"caption":"Descubrimiento de las 
H-Neurons"},{"@type":"BreadcrumbList","@id":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence\/h-neurons-a-major-breakthrough-in-reducing-llm-hallucinations#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Inicio","item":"https:\/\/www.humanlevel.com\/en"},{"@type":"ListItem","position":2,"name":"Blog","item":"https:\/\/www.humanlevel.com\/en\/blog"},{"@type":"ListItem","position":3,"name":"AI","item":"https:\/\/www.humanlevel.com\/en\/blog\/artificial-intelligence"},{"@type":"ListItem","position":4,"name":"H-Neurons: a major breakthrough in reducing hallucinations in LLMs"}]},{"@type":"WebSite","@id":"https:\/\/www.humanlevel.com\/en#website","url":"https:\/\/www.humanlevel.com\/en","name":"Human Level","description":"Web positioning and online marketing consultant Human Level","publisher":{"@id":"https:\/\/www.humanlevel.com\/en#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.humanlevel.com\/en?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.humanlevel.com\/en#organization","name":"Human Level","url":"https:\/\/www.humanlevel.com\/en","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.humanlevel.com\/en#\/schema\/logo\/image\/","url":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/logohl25x3.png","contentUrl":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/logohl25x3.png","width":600,"height":93,"caption":"Human 
Level"},"image":{"@id":"https:\/\/www.humanlevel.com\/en#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/human-level-communications","https:\/\/www.youtube.com\/user\/humanlevelcommunica","https:\/\/bsky.app\/profile\/humanlevel.bsky.social","https:\/\/instagram.com\/humanlevel"]},{"@type":"Person","@id":"https:\/\/www.humanlevel.com\/en#\/schema\/person\/11ad888926867985985a0210476bae94","name":"Ram\u00f3n Saquete","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/1x1-ramon-saquete-26-96x96.jpg","url":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/1x1-ramon-saquete-26-96x96.jpg","contentUrl":"https:\/\/www.humanlevel.com\/wp-content\/uploads\/1x1-ramon-saquete-26-96x96.jpg","caption":"Ram\u00f3n Saquete"},"description":"Web Developer and Technical SEO Consultant at Human Level. He holds degrees in Computer Engineering and Technical Engineering in Computer Systems. He also earned a Higher Vocational Degree in Computer Applications Development and later obtained the Certificate of Pedagogical Aptitude (CAP). 
He is an expert in WPO and indexability.","sameAs":["https:\/\/es.linkedin.com\/in\/ramonsaquete","https:\/\/x.com\/daiatron"],"url":"https:\/\/www.humanlevel.com\/en\/author\/ramon"}]}},"_links":{"self":[{"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/posts\/88431","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/comments?post=88431"}],"version-history":[{"count":5,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/posts\/88431\/revisions"}],"predecessor-version":[{"id":88438,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/posts\/88431\/revisions\/88438"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/media\/88433"}],"wp:attachment":[{"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/media?parent=88431"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/categories?post=88431"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.humanlevel.com\/en\/wp-json\/wp\/v2\/tags?post=88431"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}