Someone found a zero-day vulnerability in Linux with the help of OpenAI's o3 LLM. Link: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
Links
Data source: video of elevator door movement: https://www.youtube.com/watch?v=j_KKJnDDxQY
The video was converted into images using ffmpeg:
ffmpeg -i pintu-lift.mov pintu_%04d.png
The door position was annotated manually using Make Sense AI (https://www.makesense.ai/). The annotations were exported in CSV format.
The CSV data was then processed with Python and a Jupyter Notebook.
Here are the resulting graphs:
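As a rough sketch of the processing step above — the column layout of the Make Sense AI rectangle CSV export (label, x, y, width, height, image name, image width, image height; no header row) is an assumption and should be checked against the actual export:

```python
import csv

def door_positions(csv_path):
    """Return (frame_name, horizontal box center) pairs from a
    Make Sense AI rectangle CSV export (assumed column layout:
    label, x, y, width, height, image_name, image_width, image_height)."""
    rows = []
    with open(csv_path, newline="") as f:
        for label, x, y, w, h, name, iw, ih in csv.reader(f):
            cx = float(x) + float(w) / 2.0  # box center = left edge + half width
            rows.append((name, cx))
    rows.sort()  # pintu_0001.png, pintu_0002.png, ... sort chronologically
    return rows
```

The resulting series can then be plotted in the notebook (e.g. frame index on the x-axis, box center on the y-axis with matplotlib) to obtain the door-movement graph.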
AI as a tool for academic writing. Source article: "Artificial intelligence-assisted academic writing: recommendations for ethical use". Here is the abstract.
Generative artificial intelligence (AI) tools have been selectively adopted across the academic community to help researchers complete tasks in a more efficient manner. The widespread release of the Chat Generative Pre-trained Transformer (ChatGPT) platform in 2022 has made these tools more accessible to scholars around the world. Despite their tremendous potential, studies have uncovered that large language model (LLM)-based generative AI tools have issues with plagiarism, AI hallucinations, and inaccurate or fabricated references. This raises legitimate concern about the utility, accuracy, and integrity of AI when used to write academic manuscripts. Currently, there is little clear guidance for healthcare simulation scholars outlining the ways that generative AI could be used to legitimately support the production of academic literature. In this paper, we discuss how widely available, LLM-powered generative AI tools (e.g. ChatGPT) can help in the academic writing process. We first explore how academic publishers are positioning the use of generative AI tools and then describe potential issues with using these tools in the academic writing process. Finally, we discuss three categories of specific ways generative AI tools can be used in an ethically sound manner and offer four key principles that can help guide researchers to produce high-quality research outputs with the highest of academic integrity.
About 100 cars were controlled with an RL-based AI algorithm to improve traffic efficiency and reduce congestion.
Popular article: Scaling Up Reinforcement Learning for Traffic Smoothing: A 100-AV Highway Deployment
Here are some online resources for learning the C language
Learning deep learning is not that complicated. The basic prerequisite is understanding high-school math, and you don't need an expensive computer either. Course address: "Practical Deep Learning"
Several materials are commonly used to make cookware:
Stainless steel is very good quality. Heat from the stove spreads evenly, and it can be used for many cooking methods. It is non-stick once 'seasoned', and easy to maintain. The pan's material generally does not leach into food, except for chromium, which can react with food when cooking acidic dishes.
Iron pans come either as bare iron or coated with enamel. Iron reacts when used to cook acidic foods, and iron from the pan can leach into the food.
An enamel pan is an iron pan coated with enamel.
Glass is good, but breaks easily. Its thermal conductivity is poorer than that of metal pans.
Copper conducts heat very well. Its weakness is that it is fairly reactive with food.
Ceramic is non-stick and non-reactive, but less suitable for high heat. However, it is sometimes given a Teflon non-stick coating.
Teflon can potentially release microplastics into the environment.
Aluminium can be toxic; it can leach into food, especially acidic dishes.
Carbon steel rusts more easily than stainless steel.
Titanium is stronger than steel and lighter, but very expensive.
Source: https://github.com/bytedance/pasa
Q: How did DeepSeek get around export restrictions? A: They didn't. They just tinkered with their chips to make sure they handled memory as efficiently as possible. They lucked out: their perfectly optimized low-level code wasn't actually held back by chip capacity.
Q: How did DeepSeek train so much more efficiently? A: They used the formulas below to “predict” which tokens the model would activate. Then, they only trained these tokens. They need 95% fewer GPUs than Meta because for each token, they only trained 5% of their parameters.
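The "5% of parameters per token" figure matches a mixture-of-experts design, where a gating function picks a few experts per token and only those experts' parameters run. A toy sketch of top-k gating — not DeepSeek's actual code; the expert count and k below are illustrative:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of gate logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(gate_logits, k=2):
    """Return the indices of the top-k experts for one token.
    Only these experts' parameters run (and receive gradients)."""
    probs = softmax(gate_logits)
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

# 16 experts with top-2 routing: each token activates 2/16 = 12.5%
# of the expert parameters; sparser ratios give the ~5% figure above.
active = route_token([0.3, 2.1, -0.5, 1.7] + [0.0] * 12, k=2)
```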
Q: How is DeepSeek’s inference so much cheaper? A: They compressed the KV cache. (This was a breakthrough they made a while ago.)
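The compression referred to is DeepSeek's multi-head latent attention: rather than caching full key/value vectors per token, cache one small latent and reconstruct K and V from it at attention time. A dimension-only toy sketch — the sizes and projection matrices here are illustrative, not the real architecture:

```python
def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

D_MODEL, D_LATENT = 8, 2  # illustrative sizes

def compress(h, W_down):
    """Down-project the hidden state; this short latent is all we cache."""
    return matvec(W_down, h)  # length D_LATENT

def reconstruct(c, W_up_k, W_up_v):
    """Rebuild K and V from the cached latent at attention time."""
    return matvec(W_up_k, c), matvec(W_up_v, c)

# Cache cost per token drops from 2 * D_MODEL floats (full K and V)
# to D_LATENT floats: here 16 -> 2, an 8x reduction.
```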
Q: How did they replicate o1? A: Reinforcement learning. Take complicated questions that can be easily verified (either math or code). Update the model if correct.
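The loop just described can be sketched as: sample candidate answers, score them with an exact-match verifier, and keep only verified-correct traces for the update. This is a rejection-sampling simplification for illustration; DeepSeek's actual RL recipe is more involved:

```python
def verify(problem, answer):
    """Verifiable reward: 1.0 iff the answer matches the known result
    (for math) or, in practice, passes unit tests (for code)."""
    return 1.0 if answer.strip() == problem["expected"] else 0.0

def build_update_batch(problems, sample_fn, n_samples=4):
    """Collect (question, answer) pairs that earned reward 1.0;
    the model is then updated toward these correct traces."""
    batch = []
    for p in problems:
        for _ in range(n_samples):
            a = sample_fn(p)  # sample_fn stands in for the model's generator
            if verify(p, a) == 1.0:
                batch.append((p["question"], a))
    return batch
```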
Reference: https://x.com/wordgrammer/status/1883712727073607859