{"id":4029,"date":"2025-08-31T11:59:00","date_gmt":"2025-08-31T06:59:00","guid":{"rendered":"https:\/\/www.edopedia.com\/blog\/?p=4029"},"modified":"2025-10-21T06:30:33","modified_gmt":"2025-10-21T01:30:33","slug":"minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks","status":"publish","type":"post","link":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/","title":{"rendered":"MiniMax-M1 vs GPT-4o vs Claude 3 Opus vs LLaMA\u00a03 Benchmarks"},"content":{"rendered":"\n<p>MiniMax-M1 is a new open-weight large language model (456\u202fB parameters, ~46\u202fB active) built with hybrid mixture-of-experts and a \u201clightning attention\u201d mechanism. It natively supports up to&nbsp;<strong>1\u202fmillion token<\/strong>&nbsp;contexts. MiniMax-AI trained M1 for complex reasoning (math, logic, coding, long-context tasks) via reinforcement learning. In this analysis we report MiniMax-M1\u2019s scores on key benchmarks (MMLU, GSM8K, HellaSwag, ARC, HumanEval, BBH, DROP) and compare against OpenAI\u2019s GPT-4\/GPT-4o, Anthropic\u2019s Claude 3 Opus, and Meta\u2019s LLaMA 3 (70B).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">MMLU (Massive Multitask Language Understanding)<\/h2>\n\n\n\n<p>MMLU measures general knowledge across 57 academic and professional subjects (multiple-choice accuracy). MiniMax-M1-80K scored&nbsp;<strong>81.1%<\/strong>&nbsp;on MMLU-Pro (an extended version of MMLU). This is below the top-tier models: GPT-4 achieves roughly&nbsp;<strong>85\u201386%<\/strong>, Claude 3 Opus around&nbsp;<strong>85%<\/strong>&nbsp;(noting evaluation differences), and LLaMA&nbsp;3 (70B) is reported near&nbsp;<strong>86%<\/strong>&nbsp;on standard MMLU. The table below compares MiniMax-M1 and peers:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Model<\/th><th>MMLU Accuracy<\/th><\/tr><\/thead><tbody><tr><td>MiniMax-M1-80K<\/td><td>81.1\u202f%<\/td><\/tr><tr><td>GPT-4 \/ GPT-4o<\/td><td>\u224886\u202f%<\/td><\/tr><tr><td>Claude 3 Opus<\/td><td>\u224885\u202f%<\/td><\/tr><tr><td>LLaMA&nbsp;3 (70B)<\/td><td>\u224886\u202f%<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Observations:<\/strong>&nbsp;MiniMax-M1\u2019s MMLU score is solid but modestly below the latest state-of-the-art. GPT-4 (86.4%) and Claude 3 Opus lead with mid-80s accuracy, reflecting their broad knowledge. The gap (~5 points) suggests MiniMax-M1 is competitive but not yet matching the very best models on broad knowledge tasks. In practice, this means MiniMax can handle standard academic questions reasonably well, but GPT-4\/Claude retain an edge in diverse subjects.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">GSM8K (Grade-School Math)<\/h2>\n\n\n\n<p>GSM8K is a dataset of 8,500 grade-school math word problems requiring multi-step arithmetic. It tests chain-of-thought reasoning. MiniMax-M1 has&nbsp;<strong>no official published score<\/strong>&nbsp;on GSM8K, so we note peer results for context. GPT-4 scores about&nbsp;<strong>92%<\/strong>&nbsp;accuracy on GSM8K (via few-shot CoT prompting). Anthropic Claude 3 Opus reaches about&nbsp;<strong>95%<\/strong>&nbsp;(zero-shot), making it state-of-the-art. Meta LLaMA&nbsp;3 results are not widely reported, but prior LLaMA-2 (70B) was ~57%. 
<h2>GSM8K (Grade-School Math)</h2>

<p>GSM8K is a dataset of 8,500 grade-school math word problems requiring multi-step arithmetic, and it is a standard test of chain-of-thought reasoning. MiniMax-M1 has <strong>no official published score</strong> on GSM8K, so we note peer results for context. GPT-4 scores about <strong>92%</strong> (few-shot CoT prompting). Claude 3 Opus reaches about <strong>95%</strong> (zero-shot CoT), state of the art at its release. Meta reports roughly <strong>93%</strong> for LLaMA 3 70B Instruct (8-shot CoT); the earlier LLaMA 2 (70B) managed only ~57%. Our comparison:</p>

<table>
<thead><tr><th>Model</th><th>GSM8K Accuracy</th></tr></thead>
<tbody>
<tr><td>MiniMax-M1-80K</td><td>N/A (unreported)</td></tr>
<tr><td>GPT-4 / GPT-4o</td><td>92%</td></tr>
<tr><td>Claude 3 Opus</td><td>95%</td></tr>
<tr><td>LLaMA 3 (70B)</td><td>≈93% (Instruct, 8-shot CoT)</td></tr>
</tbody>
</table>

<p><strong>Observations:</strong> GSM8K is a demanding math benchmark, yet the frontier models all exceed 90% with chain-of-thought prompting. Without a MiniMax-M1 result, we can only note that matching Claude’s 95% would put it at the state of the art; more likely it trails somewhat. MiniMax’s emphasis on long-range context suggests arithmetic word problems were not its primary tuning target.</p>
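<p>For reference, scoring GSM8K amounts to prompting for step-by-step work and exact-matching the final number. The sketch below reuses the same hypothetical <code>ask_model</code> stub as above; in the official dataset, gold answers appear after a <code>####</code> delimiter, which is what the final-number match approximates.</p>

<pre><code class="language-python">
import re

def extract_final_number(completion: str):
    """Return the last number in a chain-of-thought completion, as a string."""
    numbers = re.findall(r"-?\d+\.?\d*", completion.replace(",", ""))
    return numbers[-1] if numbers else None

def score_gsm8k(problems: list[dict], ask_model) -> float:
    """problems: [{'question': str, 'answer': '18'}, ...] (gold = final integer)."""
    hits = 0
    for p in problems:
        prompt = (f"Question: {p['question']}\n"
                  "Let's think step by step, then state the final number.")
        pred = extract_final_number(ask_model(prompt))
        hits += pred == p["answer"]  # exact match on the extracted number
    return hits / len(problems)
</code></pre>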
<h2>HellaSwag (Commonsense Reasoning)</h2>

<p>HellaSwag asks models to choose the most sensible sentence completion for everyday scenarios (commonsense plausibility). Top models have essentially saturated this benchmark: GPT-4 achieves about <strong>95%</strong> accuracy (10-shot) and Claude 3 Opus scores <strong>95.4%</strong>. No MiniMax-M1 number is available, but a model of its class would likely land in a similar range if evaluated. For comparison:</p>

<table>
<thead><tr><th>Model</th><th>HellaSwag Accuracy</th></tr></thead>
<tbody>
<tr><td>MiniMax-M1-80K</td><td>N/A</td></tr>
<tr><td>GPT-4 / GPT-4o</td><td>≈95%</td></tr>
<tr><td>Claude 3 Opus</td><td>95.4%</td></tr>
<tr><td>LLaMA 3 (70B)</td><td>–</td></tr>
</tbody>
</table>

<p><strong>Observations:</strong> All frontier models score in the mid-90s on HellaSwag, indicating near-human performance. With the benchmark this close to saturation, even a strong MiniMax-M1 result would add little signal for ranking models.</p>

<h2>ARC (AI2 Reasoning Challenge)</h2>

<p>ARC-Challenge consists of hard elementary-science questions (multiple choice). GPT-4 reportedly achieves <strong>~96%</strong> on ARC-Challenge with few-shot chain-of-thought, and Claude 3 Opus reports <strong>96.4%</strong> (25-shot). MiniMax-M1’s ARC performance is not published. For illustration:</p>

<table>
<thead><tr><th>Model</th><th>ARC-Challenge Accuracy</th></tr></thead>
<tbody>
<tr><td>MiniMax-M1-80K</td><td>N/A</td></tr>
<tr><td>GPT-4 / GPT-4o</td><td>~96%</td></tr>
<tr><td>Claude 3 Opus</td><td>96.4%</td></tr>
<tr><td>LLaMA 3 (70B)</td><td>–</td></tr>
</tbody>
</table>

<p><strong>Observations:</strong> ARC-Challenge tests scientific commonsense reasoning. GPT-4 and Claude 3 Opus answer nearly all questions correctly with chain-of-thought strategies. If evaluated, MiniMax-M1 could lean on its extended reasoning, but its relative rank is unknown. In practice, frontier-model performance on ARC is now so high that the benchmark is no longer very discriminative for ranking them.</p>

<h2>HumanEval (Code Generation)</h2>

<p>HumanEval measures Python code-generation correctness by pass@1: a problem counts as solved if a sampled program passes its unit tests. MiniMax-M1’s coding score is not reported. GPT-4o scores around <strong>90%</strong> on HumanEval (the original GPT-4 launched at 67%, and later snapshots improved markedly). Claude 3 Opus scores <strong>84.9%</strong>, and Meta reports <strong>81.7%</strong> for LLaMA 3 70B Instruct. We compare as follows:</p>

<table>
<thead><tr><th>Model</th><th>HumanEval Pass@1</th></tr></thead>
<tbody>
<tr><td>MiniMax-M1-80K</td><td>N/A</td></tr>
<tr><td>GPT-4 / GPT-4o</td><td>≈90% (GPT-4o)</td></tr>
<tr><td>Claude 3 Opus</td><td>84.9%</td></tr>
<tr><td>LLaMA 3 (70B)</td><td>81.7% (Instruct)</td></tr>
</tbody>
</table>

<p><strong>Observations:</strong> GPT-4-class models excel at code tasks, with pass rates approaching 90%. Claude 3’s 84.9% is strong but lower. MiniMax-M1 is aimed at reasoning, and with no official HumanEval result its coding performance is unknown; it may well lag these models. For practitioners, GPT-4 and Claude remain the safer choices for code generation, while MiniMax’s advantages lie elsewhere (e.g. long-context reasoning).</p>
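<p>Pass@k deserves a precise definition. When n samples are drawn per problem and c of them pass the tests, the unbiased estimator from the HumanEval paper (Chen et al., 2021) is 1 − C(n−c, k)/C(n, k), averaged over problems. A direct Python rendering:</p>

<pre><code class="language-python">
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c passed the tests.
    Computes 1 - C(n-c, k) / C(n, k) in a numerically stable product form."""
    if k > n - c:  # fewer than k failing samples: every k-subset contains a pass
        return 1.0
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

def mean_pass_at_1(results: list[tuple[int, int]]) -> float:
    """results: one (n, c) pair per benchmark problem."""
    return sum(pass_at_k(n, c, 1) for n, c in results) / len(results)

# e.g. 3 problems, 5 samples each:
# mean_pass_at_1([(5, 5), (5, 2), (5, 0)])  ->  (1.0 + 0.4 + 0.0) / 3 ≈ 0.467
</code></pre>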
<h2>BBH (BIG-Bench Hard)</h2>

<p>BBH is a suite of 23 especially difficult task families drawn from BIG-Bench. MiniMax-M1’s BBH score is not available. Claude 3 Opus scores <strong>86.8%</strong> on BBH (3-shot CoT), and GPT-4 is reported around <strong>83%</strong> under the same setting. For comparison:</p>

<table>
<thead><tr><th>Model</th><th>BBH Accuracy</th></tr></thead>
<tbody>
<tr><td>MiniMax-M1-80K</td><td>N/A</td></tr>
<tr><td>GPT-4 / GPT-4o</td><td>≈83%</td></tr>
<tr><td>Claude 3 Opus</td><td>86.8%</td></tr>
<tr><td>LLaMA 3 (70B)</td><td>–</td></tr>
</tbody>
</table>

<p><strong>Observations:</strong> BBH aggregates the tasks on which earlier models fell short of average human raters; Claude 3’s 86.8% is state of the art here, with GPT-4 a few points behind. Without MiniMax data we can only note that the large frontier models again lead. MiniMax-M1 may not have been explicitly optimized for these tasks, so its standing is unknown.</p>

<h2>DROP (Discrete Reasoning Over Paragraphs)</h2>

<p>DROP tests discrete numerical reasoning over passages, scored by token-level F1. Claude 3 Opus reports an F1 of <strong>83.1</strong> (3-shot), and GPT-4’s technical report lists <strong>80.9</strong> (3-shot). MiniMax-M1’s DROP score has not been published. Comparison:</p>

<table>
<thead><tr><th>Model</th><th>DROP F1</th></tr></thead>
<tbody>
<tr><td>MiniMax-M1-80K</td><td>N/A</td></tr>
<tr><td>GPT-4 / GPT-4o</td><td>80.9</td></tr>
<tr><td>Claude 3 Opus</td><td>83.1</td></tr>
<tr><td>LLaMA 3 (70B)</td><td>–</td></tr>
</tbody>
</table>

<p><strong>Observations:</strong> GPT-4 and Claude 3 both handle DROP well, with Claude slightly ahead, reflecting strong combined reading and arithmetic skills; unlike HellaSwag or ARC, this benchmark still leaves clear headroom. MiniMax-M1’s focus is broad reasoning, and no DROP result is given. In practice, MiniMax’s strengths in long-context reasoning may not directly help on DROP, whose passages are short.</p>
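<p>For intuition, DROP’s headline metric is a bag-of-tokens F1 between the predicted and gold answers. The sketch below is a simplified version; the official evaluator adds answer normalization, number handling, and multi-span alignment, all omitted here.</p>

<pre><code class="language-python">
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Simplified DROP-style bag-of-tokens F1 (no normalization/multi-span)."""
    pred, ref = prediction.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())  # shared token count
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# token_f1("around 25 percent", "25 percent")  ->  0.8
</code></pre>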
<p><strong>Summary:</strong> MiniMax-M1 shows strong performance on benchmarks involving long context and complex reasoning (as its tech report emphasizes), but on standard leaderboards such as MMLU it trails GPT-4/GPT-4o and Claude 3 Opus, and for several benchmarks it simply has no published numbers. The tables above show GPT-4 and Claude generally leading on accuracy. Key insights:</p>

<ul>
<li>MiniMax-M1’s MMLU result (81.1% on the harder MMLU-Pro) is good but several points below GPT-4/Claude on the standard benchmark.</li>
<li>On math (GSM8K), GPT-4, Claude, and LLaMA 3 70B reach 92–95%; MiniMax-M1’s result is unreported and unlikely to dramatically exceed these.</li>
<li>On commonsense tasks (HellaSwag, ARC), GPT-4 and Claude are at near-human, near-saturated levels; MiniMax-M1 would presumably score similarly if evaluated.</li>
<li>For coding (HumanEval) and the hardest aggregate tasks (BBH), GPT-4/Claude again lead; MiniMax-M1 publishes no numbers, suggesting its tuning effort went elsewhere.</li>
</ul>

<p>Overall, MiniMax-M1 is a competitive open model, but GPT-4 and Claude 3 Opus remain the strongest performers on these standard benchmarks. These comparisons help ML practitioners place MiniMax’s relative strengths: it excels at long-context and hybrid-attention workloads, while GPT-4 and Claude retain the edge on conventional academic and reasoning exams.</p>
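<p>Finally, for practitioners who want to reproduce such comparisons: the scoring sketches above all assumed an <code>ask_model</code> stub. One plausible way to back it, shown below, is an OpenAI-compatible endpoint (for example, MiniMax-M1’s open weights served locally via vLLM); the base URL, API key, and model id are placeholders, not verified settings.</p>

<pre><code class="language-python">
# Hypothetical backing for the `ask_model` stub used in the sketches above.
# Assumes an OpenAI-compatible server (e.g. vLLM serving the open weights);
# the base_url, api_key, and model id below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def ask_model(prompt: str) -> str:
    response = client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M1-80k",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic-leaning decoding for benchmarking
    )
    return response.choices[0].message.content
</code></pre>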
-->","yoast_head_json":{"title":"MiniMax-M1 vs GPT-4o vs Claude 3 Opus vs LLaMA\u00a03 Benchmarks","description":"MiniMax-M1 is a new open-weight large language model (456\u202fB parameters, ~46\u202fB active) built with hybrid mixture-of-experts and a \u201clightning attention\u201d","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/","og_locale":"en_US","og_type":"article","og_title":"MiniMax-M1 vs GPT-4o vs Claude 3 Opus vs LLaMA\u00a03 Benchmarks","og_description":"MiniMax-M1 is a new open-weight large language model (456\u202fB parameters, ~46\u202fB active) built with hybrid mixture-of-experts and a \u201clightning attention\u201d","og_url":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/","og_site_name":"Edopedia","article_author":"trulyfurqan","article_published_time":"2025-08-31T06:59:00+00:00","article_modified_time":"2025-10-21T01:30:33+00:00","og_image":[{"width":880,"height":495,"url":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2022\/02\/default_featured_image.jpg","type":"image\/jpeg"}],"author":"Furqan","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Furqan","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#article","isPartOf":{"@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/"},"author":{"name":"Furqan","@id":"https:\/\/www.edopedia.com\/blog\/#\/schema\/person\/3951cb19e3aa56df09e408c98aa02339"},"headline":"MiniMax-M1 vs GPT-4o vs Claude 3 Opus vs LLaMA\u00a03 Benchmarks","datePublished":"2025-08-31T06:59:00+00:00","dateModified":"2025-10-21T01:30:33+00:00","mainEntityOfPage":{"@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/"},"wordCount":1185,"commentCount":0,"publisher":{"@id":"https:\/\/www.edopedia.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#primaryimage"},"thumbnailUrl":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2022\/02\/default_featured_image.jpg","articleSection":["Comparisons"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/","url":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/","name":"MiniMax-M1 vs GPT-4o vs Claude 3 Opus vs LLaMA\u00a03 
Benchmarks","isPartOf":{"@id":"https:\/\/www.edopedia.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#primaryimage"},"image":{"@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#primaryimage"},"thumbnailUrl":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2022\/02\/default_featured_image.jpg","datePublished":"2025-08-31T06:59:00+00:00","dateModified":"2025-10-21T01:30:33+00:00","description":"MiniMax-M1 is a new open-weight large language model (456\u202fB parameters, ~46\u202fB active) built with hybrid mixture-of-experts and a \u201clightning attention\u201d","breadcrumb":{"@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#primaryimage","url":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2022\/02\/default_featured_image.jpg","contentUrl":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2022\/02\/default_featured_image.jpg","width":880,"height":495,"caption":"Default Featured Image"},{"@type":"BreadcrumbList","@id":"https:\/\/www.edopedia.com\/blog\/minimax-m1-vs-gpt-4o-vs-claude-3-opus-vs-llama-3-benchmarks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.edopedia.com\/blog\/"},{"@type":"ListItem","position":2,"name":"MiniMax-M1 vs GPT-4o vs Claude 3 Opus vs LLaMA\u00a03 Benchmarks"}]},{"@type":"WebSite","@id":"https:\/\/www.edopedia.com\/blog\/#website","url":"https:\/\/www.edopedia.com\/blog\/","name":"Edopedia","description":"Coding\/Programming Blog","publisher":{"@id":"https:\/\/www.edopedia.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.edopedia.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.edopedia.com\/blog\/#organization","name":"Edopedia","url":"https:\/\/www.edopedia.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.edopedia.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2017\/10\/edopedia_icon_text_10.jpg","contentUrl":"https:\/\/www.edopedia.com\/blog\/wp-content\/uploads\/2017\/10\/edopedia_icon_text_10.jpg","width":400,"height":100,"caption":"Edopedia"},"image":{"@id":"https:\/\/www.edopedia.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.edopedia.com\/blog\/#\/schema\/person\/3951cb19e3aa56df09e408c98aa02339","name":"Furqan","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/e5e68aef3ad8f0b83d56f4953c512c8e57bd2e6dc64daec33b5d0495d9058f51?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/e5e68aef3ad8f0b83d56f4953c512c8e57bd2e6dc64daec33b5d0495d9058f51?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e5e68aef3ad8f0b83d56f4953c512c8e57bd2e6dc64daec33b5d0495d9058f51?s=96&d=mm&r=g","caption":"Furqan"},"description":"Well. 