**7:51AM** **Kokoro:** I'm 3 weeks late to this, but this 82M parameter model (Kokoro) is unbelievably good. 82M parameters could easily run on your phone. Huggingface [page](https://huggingface.co/hexgrad/Kokoro-82M) (with examples). Reddit LocalLLaMA [thread](https://www.reddit.com/r/LocalLLaMA/comments/1hzuw4z/kokoro_1_on_tts_leaderboard/). Dockerized FastAPI endpoint: [github](https://github.com/remsky/Kokoro-FastAPI)

**Edge TTS:** The [Edge TTS](https://huggingface.co/spaces/innoai/Edge-TTS-Text-to-Speech) model that Kokoro was apparently built on top of. On the HF TTS [leaderboard](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena), they are all fairly close. This one has a much larger number of voices.

**Edge Browser TTS API:** I even learned today that Edge (the browser) has quite a good TTS feature built into the browser:

![[Pasted image 20250114080222.png]]

People have written a Python library to call it: [github](https://github.com/rany2/edge-tts)

**7:10PM** I'm reading a chapter in **Think Again** by Adam Grant where he talks about Ron Berger, an exceptional teacher who encouraged his students to *think again* by having them critique each other's work and produce multiple drafts.

> ... Ron had his students create blueprints for a house. When he required them to do at least four different drafts, other teachers warned him that younger students would become discouraged. Ron disagreed - he had already tested the concept with kindergarteners and first graders in art. Rather than asking them to simply draw a house, he announced, "We'll be doing four different versions of a drawing of a house."

The drafts were followed by a period of *rethinking*, where the students reflected on each other's work.

> The class didn't just critique projects. Each day they would discuss what excellence looked like. With each new project they updated their criteria. Along with rethinking their own work, they were learning to continually rethink their standards.

This relates back to AI. What if the AI's memory were used to take notes on the standards and how they should be updated? What if the AI were asked to create multiple versions of a response and critique itself against those criteria? How much better would the results become over time? This is worthy of investigation.
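A minimal sketch of what that loop might look like, assuming a generic `call_llm(prompt) -> str` helper as a stand-in for whatever model API is actually used; the helper, the prompts, and the draft/criteria handling are all placeholders of my own, not anything prescribed by Grant or Berger:

```python
# Sketch: multiple drafts + self-critique against evolving criteria.
# `call_llm` is a hypothetical placeholder for a real LLM call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to whatever model/API you use.")

def draft_and_rethink(task: str, criteria: list[str], n_drafts: int = 4) -> tuple[str, list[str]]:
    """Produce several drafts, critique each against the current criteria,
    then ask the model how the criteria themselves should be updated."""
    drafts: list[tuple[str, str]] = []
    for _ in range(n_drafts):
        context = f"Task: {task}\nCriteria:\n" + "\n".join(f"- {c}" for c in criteria)
        if drafts:
            prev_draft, prev_critique = drafts[-1]
            context += f"\n\nPrevious draft:\n{prev_draft}\n\nCritique:\n{prev_critique}"
        draft = call_llm(context + "\n\nWrite the next draft, improving on the critique.")
        critique = call_llm(f"Critique this draft against the criteria.\n\n{context}\n\nDraft:\n{draft}")
        drafts.append((draft, critique))

    # "Rethink the standards": derive updated criteria to carry forward as memory.
    updated = call_llm(
        "Given these drafts and critiques, rewrite the criteria for excellence, one per line:\n\n"
        + "\n\n".join(f"Draft:\n{d}\n\nCritique:\n{c}" for d, c in drafts)
    )
    best = drafts[-1][0]  # naive choice: assume the latest draft is the strongest
    return best, updated.splitlines()
```

The "memory" here is just the returned criteria list; the interesting part would be persisting it between sessions so the standards keep getting refined, the way Berger's class updated its rubric with each new project.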