SpaceX Starship Landing Simulation

Installation

conda install pip
pip install jupyter
pip install jupyterthemes
pip install matplotlib
pip install casadi
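After running the commands above, a quick stdlib-only check can confirm the packages are importable in the active environment (package names taken from the list above; `casadi` is assumed to be the intended optimizer package):

```python
import importlib.util

# Packages the installation steps above are expected to provide.
packages = ["jupyter", "matplotlib", "casadi"]

results = {}
for name in packages:
    # find_spec returns None when the package is not importable.
    spec = importlib.util.find_spec(name)
    results[name] = spec is not None
    print(name, "ok" if results[name] else "MISSING")
```

Anything reported as MISSING was not installed into the environment that `python` resolves to, which usually means the wrong conda environment is active.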

References

IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation

Abstract

A benchmark provides an ecosystem to measure the advancement of models with standard datasets and automatic and human evaluation metrics. We introduce IndoNLG, the first such benchmark for the Indonesian language for natural language generation (NLG). It covers six tasks: summarization, question answering, open chitchat, as well as three different language-pairs of machine translation tasks. We provide a vast and clean pre-training corpus of Indonesian, Sundanese, and Javanese datasets called Indo4B-Plus, which is used to train our pre-trained NLG model, IndoBART. We evaluate the effectiveness and efficiency of IndoBART by conducting extensive evaluation on all IndoNLG tasks. Our findings show that IndoBART achieves competitive performance on Indonesian tasks with five times fewer parameters compared to the largest multilingual model in our benchmark, mBART-LARGE (Liu et al., 2020), and an almost 4x and 2.5x faster inference time on the CPU and GPU respectively. We additionally demonstrate the ability of IndoBART to learn Javanese and Sundanese, and it achieves decent performance on machine translation tasks.

Link: https://arxiv.org/abs/2104.08200

Free Ebook: Machine Learning – A First Course for Engineers and Scientists

An interesting free ebook

When we developed the course Statistical Machine Learning for engineering students at Uppsala University, we found no appropriate textbook, so we ended up writing our own. It will be published by Cambridge University Press in 2021.

Andreas Lindholm, Niklas Wahlström, Fredrik Lindsten, and Thomas B. Schön

A draft of the book is available below. We will keep a PDF of the book freely available even after its publication.

Latest draft of the book

Table of Contents

  1. Introduction
    • The machine learning problem
    • Machine learning concepts via examples
    • About this book
  2. Supervised machine learning: a first approach
    • Supervised machine learning
    • A distance-based method: k-NN
    • A rule-based method: Decision trees
  3. Basic parametric models for regression and classification
    • Linear regression
    • Classification and logistic regression
    • Polynomial regression and regularization
    • Generalized linear models
  4. Understanding, evaluating and improving the performance
    • Expected new data error: performance in production
    • Estimating the expected new data error
    • The training error–generalization gap decomposition
    • The bias-variance decomposition
    • Additional tools for evaluating binary classifiers
  5. Learning parametric models
    • Principles of parametric modelling
    • Loss functions and likelihood-based models
    • Regularization
    • Parameter optimization
    • Optimization with large datasets
    • Hyperparameter optimization
  6. Neural networks and deep learning
    • The neural network model
    • Training a neural network
    • Convolutional neural networks
    • Dropout
  7. Ensemble methods: Bagging and boosting
    • Bagging
    • Random forests
    • Boosting and AdaBoost
    • Gradient boosting
  8. Nonlinear input transformations and kernels
    • Creating features by nonlinear input transformations
    • Kernel ridge regression
    • Support vector regression
    • Kernel theory
    • Support vector classification
  9. The Bayesian approach and Gaussian processes
  10. Generative models and learning from unlabeled data
    • The Gaussian mixture model and discriminant analysis
    • Cluster analysis
    • Deep generative models
    • Representation learning and dimensionality reduction
  11. User aspects of machine learning
    • Defining the machine learning problem
    • Improving a machine learning model
    • What if we cannot collect more data?
    • Practical data issues
    • Can I trust my machine learning model?
  12. Ethics in machine learning (by David Sumpter)
    • Fairness and error functions
    • Misleading claims about performance
    • Limitations of training data

Speeding Up Reinforcement Learning with a New Physics Simulation Engine

Source: https://ai.googleblog.com/2021/07/speeding-up-reinforcement-learning-with.html

Reinforcement learning (RL) is a popular method for teaching robots to navigate and manipulate the physical world, which itself can be simplified and expressed as interactions between rigid bodies (i.e., solid physical objects that do not deform when a force is applied to them). In order to facilitate the collection of training data in a practical amount of time, RL usually leverages simulation, where approximations of any number of complex objects are composed of many rigid bodies connected by joints and powered by actuators. But this poses a challenge: it frequently takes millions to billions of simulation frames for an RL agent to become proficient at even simple tasks, such as walking, using tools, or assembling toy blocks.
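To make the frame-count point concrete, here is a tiny tabular Q-learning loop on a toy one-dimensional "walk to the goal" task. The environment, states, and rewards are invented for illustration; in the setting the article describes, the `step` function would be a rigid-body physics simulator, which is exactly why fast simulation engines matter:

```python
import random

N_STATES = 5          # positions 0..4; the goal is state 4
ACTIONS = [-1, +1]    # move left / move right

def step(state, action):
    """Toy environment: reward 1 only on reaching the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: state x action
alpha, gamma, epsilon = 0.1, 0.9, 0.2
random.seed(0)

frames = 0
for episode in range(500):
    state = 0
    for _ in range(1000):  # cap episode length
        if random.random() < epsilon:
            a = random.randrange(2)            # explore
        else:
            a = q[state].index(max(q[state]))  # exploit
        next_state, reward, done = step(state, ACTIONS[a])
        # Standard Q-learning update.
        q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
        state = next_state
        frames += 1
        if done:
            break

print("simulated frames:", frames)
```

Even this 5-state toy problem consumes thousands of environment steps before the policy is reliable; with a full physics simulator in the loop, each of those steps is far more expensive, which is the bottleneck the article's simulation engine targets.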

NOAA Fisheries Steller Sea Lion Population Count

Original Competition: https://www.kaggle.com/c/noaa-fisheries-steller-sea-lion-population-count

Best solutions:

Related Articles

Papers

Measuring Ventilation Quality with CO2 Sensors

Belgium already requires CO2 measurement in public places. [https://www.info-coronavirus.be/en/ventilation/]

Research supports using CO2 levels as a proxy for measuring the risk of COVID-19 transmission.
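The idea can be sketched as a simple reading-to-category mapping. The thresholds below are common rules of thumb (outdoor air is roughly 400–450 ppm, and guidance such as Belgium's uses bands around 900–1200 ppm), not values from a specific standard — treat them as assumptions:

```python
def ventilation_quality(co2_ppm: float) -> str:
    """Classify indoor air quality from a CO2 reading (ppm).

    Thresholds are illustrative rules of thumb, not an official
    standard: readings well above the outdoor baseline mean exhaled
    air (and thus potential aerosols) is accumulating in the room.
    """
    if co2_ppm < 800:
        return "good"        # well ventilated
    if co2_ppm < 1200:
        return "moderate"    # increase ventilation
    return "poor"            # act: ventilate or reduce occupancy

print(ventilation_quality(600))   # good
print(ventilation_quality(1500))  # poor
```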

References

Popular Augmentation Library


AugLy

AugLy (https://github.com/facebookresearch/AugLy) and Albumentations (https://github.com/albumentations-team/albumentations) are two widely used augmentation libraries.
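Both libraries share the same core idea: compose a list of randomized transforms and apply them to an image. A minimal pure-NumPy sketch of that compose-and-apply pattern (the function and class names here are invented for illustration — this is not the AugLy or Albumentations API):

```python
import random
import numpy as np

def horizontal_flip(img):
    # Reverse the column axis of an H x W x C array.
    return img[:, ::-1]

def adjust_brightness(img, delta=30):
    # Shift pixel values, clipping to the valid uint8 range.
    return np.clip(img.astype(int) + delta, 0, 255).astype(np.uint8)

class Compose:
    """Apply each transform with its own probability."""
    def __init__(self, transforms):
        self.transforms = transforms  # list of (fn, probability)

    def __call__(self, img):
        for fn, p in self.transforms:
            if random.random() < p:
                img = fn(img)
        return img

random.seed(0)
pipeline = Compose([(horizontal_flip, 0.5), (adjust_brightness, 0.5)])
image = np.zeros((4, 4, 3), dtype=np.uint8)  # dummy black image
augmented = pipeline(image)
print(augmented.shape)  # (4, 4, 3)
```

The real libraries add many more transforms, probability handling per transform, and (in Albumentations) synchronized handling of masks and bounding boxes, but the pipeline shape is the same.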

AugLy example

Albumentations

Albumentations example:


References