The latest industry news, interviews, technologies, and resources.
Introducing the MakoraGenerate CLI
Everyone's favourite kernel generation agent, now in your CLI!
Feb 18, 2026
Code generated by MakoraGenerate is wrong... or brilliant?
The code was correct. The problem wasn't.
Feb 12, 2026
We RL'd GPT-5 to Write Better Kernels
Pushing frontier model capabilities with reinforcement learning
Jan 15, 2026
Discovery & Mitigation of Reward Hacks in Automated Kernel Optimization
A systematic study of reward hacking, adversarial detection, and robust evaluation for LLM-optimized GPU kernels
Dec 16, 2025
Fast LLM-Generated Kimi Delta Attention Kernels
MakoraGenerate implements functional and fast KDA kernels with evolutionary search
Dec 3, 2025
Mako is now Makora
Same team. Same mission. Two new letters.
Sep 18, 2025
From Optimizing Kernels to Optimizing Benchmarks
Creating a representative subset of KernelBench to evaluate a long-running agent more efficiently
Aug 12, 2025
We Raised $8.5M to Make Peak GPU Performance Universally Accessible
Announcing Makora's seed round
Aug 6, 2025
MakoraGenerate Achieves 1.83x Performance over torch.compile on DeepSeek MoE Kernels
MakoraGenerate outperforms torch.compile when optimizing DeepSeek MoE kernels
Jul 29, 2025
How MakoraGenerate Leverages PTX and Tensor Cores for Fast Matrix Multiplication
MakoraGenerate writes inline PTX to achieve near-optimal GEMM performance
Jul 22, 2025
15x Faster CUDA Kernel Compilation for MakoraGenerate
Optimizing the kernel generation pipeline through accelerated compilation
Jun 25, 2025
Introducing MakoraGenerate: AI-Powered GPU Kernel Generation in Under 60 Seconds
MakoraGenerate is an LLM-powered AI agent that writes GPU kernels
May 29, 2025
Unlocking AI Model Performance with Makora on Microsoft Azure
Makora improves the performance of vLLM and SGLang
Apr 2, 2025
Kernels Together Strong 🦧 Improving Performance using Multiple Kernel Providers
Achieve state-of-the-art latency on FLUX.1-schnell by leveraging multiple executor backends
Jan 29, 2025
1-Click deploy models on AMD MI300X
Easily deploy models on Makora
Oct 29, 2024
GPU go brrrrr, but at what cost?
Identifying the most price-efficient AI inference accelerators
Copyright © 2026 Makora. All rights reserved.