I would highly recommend the data.table package over tibble or the basic data.frame if you are doing any kind of modeling in R with larger datasets. Yes, R has many data structures, but knowing how to use data.table will blow your mind in terms of efficiency. Matt and the other contributors have built something extremely fast and flexible.

I get that R is not for everyone, but used correctly it is a beast.

Now, this is anecdotal, but in the insurance industry we have what we call on-level premium calculators. It is basically a program that rerates all policies with the current set of rates.

Our current R program can rate 41,000 policies a second, fully vectorized, on a user laptop with an i5 from 2015.

In contrast, the previous SAS program could do 231 policies a minute on a 64-core Xeon processor from 2017.

For our workload and type of work, R has been a godsend.

Bonus: we can put what our data scientists develop in R directly into production (after peer review, testing, etc., no different from any other production code).

Back when I started in 2005, we modeled in proprietary software like Emblem, used Excel to build a first-draft premium calculator, rebuilt the computation in SAS for the on-level program, and sent specs to IT to rebuild the program yet again for production. All three had to produce the same results.

I've tried Python, Go, Rust, and Julia. I'd say Python could be a good alternative, but the speed of data.table, the RStudio IDE, and the ease of package management in R make R an obvious choice for us. I believe Julia is the future, but so far the in-house adoption rate has been low.
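For anyone curious, here is a minimal sketch of the kind of join-and-compute pattern a vectorized rerating program can use with data.table. The table names, columns, and premium formula below are made up purely for illustration; the real rating logic is far more involved.

    library(data.table)

    # Hypothetical policy and current-rate tables; column names and the
    # premium formula are invented for illustration only.
    policies <- data.table(
      policy_id    = 1:5,
      rating_class = c("A", "B", "A", "C", "B"),
      exposure     = c(1.0, 0.5, 2.0, 1.0, 1.5)
    )
    rates <- data.table(
      rating_class = c("A", "B", "C"),
      base_rate    = c(100, 150, 200),
      factor       = c(1.00, 1.10, 1.25)
    )

    # Join the current rates onto every policy and compute the on-level
    # premium in one vectorized pass -- no row-by-row loop.
    onlevel <- rates[policies, on = "rating_class"][
      , onlevel_premium := base_rate * factor * exposure][]

The same pattern scales well to large policy files because both the join and the arithmetic are vectorized, and data.table adds the new column by reference rather than copying the table.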