Ask HN: As a data scientist, what should be in my toolkit in 2018?

341 points · by mxgrover · 7 years ago

38 comments

ms013 · over 7 years ago
Mathematics. Which branch of math is domain dependent. Stats come up everywhere. Graphs do too. In addition to baseline math, you really need to understand the problem domain and the goals of the analysis.

Languages and libraries are just tools: knowing APIs doesn't tell you at all how to solve a problem. They just give you things to throw at a problem. You need to know a few tools, but to be honest, they're easy, and you can go surprisingly far with few and relatively simple ones. Knowing how, when, and where to apply them is the hard part, and that often boils down to understanding the mathematics and domain you are working in.

And don't overuse viz. Pictures do communicate effectively, but often people visualize without understanding. The result is pretty pictures that, people eventually realize, communicate little effective domain insight. You'd be surprised that sometimes simple and ugly pictures communicate more insight than beautiful ones do.

My arsenal of tools: Python, SciPy/matplotlib, Mathematica, MATLAB, various specialized solvers (e.g., CPLEX, Z3). Mathematical arsenal: stats, probability, calculus, Fourier analysis, graph theory, PDEs, combinatorics.

(Context: been doing data work for decades, before it got its recent "data science" name.)
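To make the "specialized solvers" point concrete: a constraint solver turns a declarative problem statement into an answer, with no search code on your side. A minimal sketch, assuming the z3-solver Python bindings (the constraints are invented for illustration):

```python
from z3 import Ints, Solver, sat  # pip install z3-solver

# declare integer unknowns and hand Z3 the constraints
x, y = Ints("x y")
s = Solver()
s.add(x + 2 * y == 7, x > 0, y > 0)

if s.check() == sat:
    print(s.model())  # one satisfying assignment, e.g. [x = 5, y = 1]
```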
elsherbini · over 7 years ago
I'm a scientist (PhD student in microbiology) who works with lots of data. My data is on the order of hundreds of gigabytes (genome collections and other sequencing data) or megabytes (flat files).

I use the `tidyverse` from R [0] for everything people use `pandas` for. I think the syntax is soooo much more pleasant to use. It's declarative and, because of pipes and "quosures", highly readable. Combined with the power of `broom`, fitting simple models to the data and working with the results is really nice. Add to that that `ggplot` (plus any sane styling defaults like `cowplot`) is the fastest way to iterate on data visualizations that I've ever found. "R for Data Science" [1] is a great free resource for getting started.

Snakemake [2] is a pipeline tool that submits steps of the pipeline to a cluster and handles waiting for steps to finish before submitting dependent steps. As a result, my pipelines have very little boilerplate, they are self-documented, and the cluster is abstracted away, so the same pipeline can work on a cluster or a laptop.

[0] https://www.tidyverse.org/

[1] http://r4ds.had.co.nz/

[2] http://snakemake.readthedocs.io/en/stable/
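For readers coming from the pandas side, the pipe style maps roughly onto method chaining. A minimal sketch of the filter -> group -> summarize pattern (the column names are invented for illustration):

```python
import pandas as pd

# toy stand-in for a sequencing-depth table
df = pd.DataFrame({"site": ["A", "A", "B", "B"],
                   "depth": [5, 20, 15, 30],
                   "abundance": [0.1, 0.4, 0.2, 0.6]})

# the pandas analogue of a dplyr pipe
summary = (df
           .query("depth > 10")
           .groupby("site")
           .agg(mean_abundance=("abundance", "mean"))
           .reset_index())
print(summary)
```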
Xcelerate · over 7 years ago
As a data scientist who has been using the language for 5 years now, I find Julia by far the best programming language for analyzing and processing data. That said, it's common to find Julia packages that are only half-maintained and don't really work anymore. (I still don't know how to connect to Postgres in a bug-free way using Julia.) And you'd be hard pressed to find teams of data scientists who use Julia. So in that sense, Python has much more mature and stable libraries, and it's used everywhere. (But I really hope Julia overtakes it in the next couple of years, because it's such a well-designed language.)

Aside from programming languages, Jupyter notebooks and interactive workflows are invaluable, along with maintaining reproducible coding environments using Docker.

I think memorizing basic stats knowledge is not as useful as understanding deeper concepts like information theory, because most statistical tests can easily be performed nowadays with a library call. No one asks people to program in assembler to prove they can program anymore, so why would you memorize 30 different frequentist statistical tests and all of the assumptions that go along with each? Concepts like algorithmic complexity, minimum description length, and model selection are much more valuable.
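In that spirit, here is a sketch of model selection by information criterion rather than by a memorized test: the BIC's parameter penalty makes the overfit quintic lose to the true linear model (synthetic data; the BIC formula is correct up to an additive constant):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1 + 2 * x + rng.normal(scale=0.1, size=x.size)  # truly linear data

def bic(y, yhat, k):
    # Bayesian information criterion: fit term plus complexity penalty
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

for degree in (1, 2, 5):
    yhat = np.polyval(np.polyfit(x, y, degree), x)
    print(degree, round(bic(y, yhat, degree + 1), 1))  # lowest BIC: degree 1
```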
chewxy · over 7 years ago
My toolkit hasn't changed since 2016:

- Jupyter + Pandas for exploratory work, to quickly define a model

- Go (Gonum/Gorgonia) for production-quality work (here's a cheatsheet: https://www.cheatography.com/chewxy/cheat-sheets/data-science-in-go-a/ and a write-up on why Go: https://blog.chewxy.com/2017/11/02/go-for-data-science/)

I echo ms013's comment very much. Everything is just tools. It's more important to understand the math and the domain.
trevz · over 7 years ago
A couple of thoughts, off the top of my head:

Programming languages:

- Python (for general-purpose programming)
- R (for statistics)
- bash (for cleaning up files)
- SQL (for querying databases)

Tools:

- Pandas (for Python)
- RStudio (for R)
- Postgres (for SQL)
- Excel (the format your customers will want ;-) )

Libraries:

- SciPy (ecosystem for scientific computing)
- NLTK (for natural language)
- D3.js (for rendering results online)
xitrium · over 7 years ago
If you care about quantifying uncertainty, knowing about Bayesian methods is a good idea I don't see represented here yet. I care so much about uncertainty quantification and propagation that I work on the Stan project [0], which has an extremely complete manual (600+ pages) and many case studies illustrating different problems. Full Bayesian inference, such as that provided by Stan's Hamiltonian Monte Carlo algorithm, is fairly computationally expensive, so if you have more data than fits into RAM on a large server, you might be better served by approximate methods (but note the required assumptions) like INLA [1].

[0] http://mc-stan.org/

[1] http://www.r-inla.org/
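The simplest possible illustration of the idea, for anyone new to it: with a conjugate prior the posterior is available in closed form, so you get a full distribution over the unknown rather than a point estimate. A minimal sketch with scipy (Stan generalizes this far beyond conjugate cases):

```python
from scipy import stats

# Beta-Binomial update: 7 successes in 10 trials, flat Beta(1,1) prior
successes, trials = 7, 10
posterior = stats.beta(1 + successes, 1 + trials - successes)

lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean {posterior.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```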
piqufoh · over 7 years ago
> what tools should be in my arsenal

A sound understanding of mathematics, in particular statistics.

It's amazing how many people will talk endlessly about the latest Python/R packages (with interactive charting!!!) who can't explain Student's t-test.
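For reference, running the test takes one line with scipy; it's the interpretation (assumptions about normality, equal variance, independence) that the comment is getting at. A minimal sketch on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=0.0, scale=1.0, size=30)
b = rng.normal(loc=0.5, scale=1.0, size=30)

# two-sample Student's t-test: is the difference in means plausibly zero?
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.3f}")
```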
justusw · over 7 years ago
Dealing with large data processing problems, my main tools are as follows.

Libs:

- Dask for distributed processing
- matplotlib/seaborn for graphing
- IPython/Jupyter for creating shareable data analyses

Environment:

- S3 for data warehousing; I mainly use Parquet files with pyarrow/fastparquet
- EC2 for Dask clustering
- Ansible for EC2 setup

My problems usually can be solved by 2 memory-heavy EC2 instances. This setup works really well for me. Reading and writing intermediate results to S3 is blazing fast, especially when partitioning data by days if you work with time series.

Lots of difficult problems require custom mapping functions. I usually use them together with dask.dataframe.map_partitions, which is still extremely fast.

The most time-consuming activity is usually nunique/unique counting across large time series. For this, Dask offers hyperloglog-based approximations.

To sum it up, Dask alone makes all the difference for me!
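A minimal sketch of the two Dask features mentioned, on a toy frame standing in for a day-partitioned time series (the per-partition logic is invented for illustration):

```python
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({"user": list("abca") * 1000, "value": range(4000)})
ddf = dd.from_pandas(pdf, npartitions=4)

def flag_large(part):
    # arbitrary custom logic applied to each partition independently
    part = part.copy()
    part["large"] = part["value"] > 2000
    return part

flagged = ddf.map_partitions(flag_large)
print(flagged["large"].sum().compute())
print(ddf["user"].nunique_approx().compute())  # hyperloglog-based estimate
```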
trollied · over 7 years ago
What does "Data Scientist" actually mean these days? Does it mean "write 10 lines of Python or R, and not fully understand what it actually does"? Or something else?

I just see the term flung around so much recently, and applied to so many different roles, that it has all become a tad blurred.

Maybe we need a Data Scientist to work out what a Data Scientist is?
schaunwheeler · over 7 years ago
A lot of people in this thread are focusing on technical tools, which is normal for a discussion of this type, but I think that focus is misplaced. Most technical tools are easily learnable and are not the limiting factor in creating good data science products.

https://towardsdatascience.com/data-is-a-stakeholder-31bfdb650af0

(Disclaimer: I wrote the post at the above link.)

If you have a sound design, you can create a huge amount of value even with a very simple technical toolset. By the same token, you can have the biggest, baddest toolset in the world and still end up with a failed implementation if you have a bad design.

There are resources out there for learning good design. This is a great introduction and points to many other good materials:

https://www.amazon.com/Design-Essays-Computer-Scientist/dp/0201362988
severo · over 7 years ago
I'd say:

1. You need research skills that will allow you to ask the right questions, define the problem, and put it in a mathematical framework.

2. Familiarity with math (which kind? depends on what you are doing), to the point where you can read articles that may have a solution to your problem, plus the ability to propose changes, creating proprietary algorithms.

3. Some scripting language (Python, R, whatever).

4. (Optional) Software engineering skills. Can you put your model into production? Will your algorithm scale? Etc.
dxbydt · over 7 years ago
&gt; What’s the fizzbuzz test for data scientists anyway?<p>Here&#x27;s 3 questions I was recently asked on a bunch of DS interviews in the Valley.<p>1. Probability of seeing a whale in the first hour is 80%. What&#x27;s the probability you&#x27;ll see one by the next hour ? Next two hours ?<p>2. In closely contested election with 2 parties, what&#x27;s the chance only one person will swing the vote, if there are n=5 voters ? n = 10 ? n = 100 ?<p>3. Difference between Adam and SGD.
ever1 · over 7 years ago
Python: Jupyter, pandas, NumPy, SciPy, scikit-learn.

Numba for custom algorithms.

Dataiku (an amazing tool for preprocessing and complex flows).

Amazon RDS (Postgres), but thinking about Redshift.

Spark.

Tableau, or plotly/seaborn.
closed · over 7 years ago
I would think about which of these you see yourself doing more:

* statistical methods (more math)

* big, in-production model fitting (more Python)

* quick, scrappy data analyses for internal use (more R)

For example, I would feel weird writing a robust web server in R, but it's straightforward in Python. On the other hand, R's Shiny lets you put up quick, interactive web dashboards (though I wouldn't trust exposing them to users).
greyman · over 7 years ago
If you work in a bigger company doing data analytics, you may also come across Tableau instead of Excel. Apart from SQL, if there is more data, you might want to use BigQuery or something similar.
kmax12 · over 7 years ago
One crucial skill you will need is feature engineering. Formal methods for it aren't typically taught in data science classes, but it's worth understanding in order to build ML applications. Unfortunately, there aren't many tools available for it today, though I expect that to change this year.

Deep learning addresses it to some extent, but it isn't always the best choice if you don't have image/text data (e.g., tabular datasets from databases, log files) or a lot of training examples.

I'm the developer of a library called Featuretools (https://github.com/Featuretools/featuretools), which is a good tool to know for automated feature engineering. Our demos are also a useful resource for learning, using some interesting datasets and problems: https://www.featuretools.com/demos
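The core idea is mechanical enough to sketch by hand: derive aggregate features for each entity from a related table. A toy version in pandas (table and column names invented; Featuretools automates stacking many such primitives across relationships):

```python
import pandas as pd

# transactions table, as might come out of a production database
txns = pd.DataFrame({"customer_id": [1, 1, 2, 2, 2],
                     "amount": [10.0, 25.0, 5.0, 5.0, 90.0]})

# hand-rolled per-customer aggregation features
features = txns.groupby("customer_id")["amount"].agg(
    txn_count="count", amount_mean="mean", amount_max="max")
print(features)
```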
fredley · over 7 years ago
IPython/Jupyter, Pandas/NumPy, and Python will get you everywhere you need to go. Until Go gets decent DataFrame support, I'd be amazed if any other setup got you to a solution quicker in terms of total time.
cwyers · over 7 years ago
You can get a lot of mileage out of just using R, dplyr, ggplot2, and lm/glm. OLS still performs well in a lot of problem spaces. Understanding your data is the key there, and a lot of exploratory visualization will help.
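For anyone on the Python side, the lm workflow translates almost verbatim via statsmodels' formula API; a minimal sketch on synthetic data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"x": np.linspace(0, 10, 100)})
df["y"] = 3.0 + 0.5 * df["x"] + rng.normal(size=100)

# the counterpart of R's lm(y ~ x)
fit = smf.ols("y ~ x", data=df).fit()
print(fit.params)    # intercept near 3, slope near 0.5
print(fit.rsquared)
```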
innovather · over 7 years ago
Hey everyone, I'm not a data scientist or a developer, but I work with a lot of them. My company, Introspective Systems, recently released xGraph, an executable-graph framework for intelligent, collaborative edge computing. It's aimed at big problems: massive decision spaces, tons of data, highly distributed systems that reconfigure dynamically and need instantaneous decision making. It's great for the modeling work that data scientists do. Comment if you want more info.
drej · over 7 years ago
grep, cut, cat, tee, awk, sed, head, tail, g(un)zip, sort, uniq, split; curl; jq, python3
Jeff_Brown · over 7 years ago
Static typing lets you catch errors before running the code.<p>Pattern matching helps you write code faster (that is, spending less human time).<p>Algebraic data types, particularly sum types, let you represent complicated kinds of data concisely.<p>Coconut is an extension of Python that offers all of those.<p>Test driven development also helps you write more correct code.
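A plain-Python approximation of the sum-type-plus-pattern-matching style (Python 3.10+; Coconut and typed functional languages make this terser and check it more strictly):

```python
from dataclasses import dataclass

@dataclass
class Ok:
    value: float

@dataclass
class Err:
    reason: str

def describe(result: Ok | Err) -> str:
    # structural pattern matching over the two variants
    match result:
        case Ok(value=v):
            return f"ok: {v}"
        case Err(reason=why):
            return f"failed: {why}"

print(describe(Ok(3.14)), describe(Err("bad input")))
```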
ChrisRackauckas · over 7 years ago
A good understanding of calculus (probability), linear algebra, and your dataset/domain. Anything else can be picked up as you need it. Oh, and test-driven development in some programming language; otherwise you can't develop code you know is correct.
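The habit in miniature, as a sketch (runnable directly or under pytest; the function is invented for illustration):

```python
def normalize(xs):
    total = sum(xs)
    return [x / total for x in xs]

def test_normalize_sums_to_one():
    # the test pins down the contract before you trust the code
    assert abs(sum(normalize([1.0, 2.0, 3.0])) - 1.0) < 1e-12

if __name__ == "__main__":
    test_normalize_sums_to_one()
    print("ok")
```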
ak_yo · over 7 years ago
Experimental design and observational causal inference would be excellent skills to have, especially if you're working with people who ask "why" questions; ML is helpful but isn't going to cut it alone.
pentium10 · over 7 years ago
Google BigQuery is the winning solution for us: the first 1 TB of processing every month is free, it supports the SQL:2011 standard plus JavaScript UDFs, and we combine it with Dataprep.
bitL · over 7 years ago
Spark + MLlib, Python + Pandas + NumPy + Keras + TensorFlow + PyTorch, R, SQL, and top placement in some Kaggle competitions. This would get you a long way.
larrykwg · over 7 years ago
Nobody has mentioned this yet: ETE (http://etetoolkit.org/docs/latest/tutorial/tutorial_trees.html), a fantastic tree-visualization framework. It's intended for phylogenetic analysis but can really be used for any type of tree/hierarchical structure.
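A minimal sketch of getting started, assuming the ete3 package (ETE's real strength is its programmable graphical rendering, which this skips):

```python
from ete3 import Tree  # pip install ete3

# build a tree from a Newick string and print it as ASCII art
t = Tree("((A,B),(C,D));")
print(t.get_ascii(show_internal=False))
```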
nrjames · over 7 years ago
There are two "poles" in data science: math/modeling and backend/data-wrangling. Most of the time, the backend/data-wrangling piece is a prerequisite to the math/modeling. The vast majority of small and medium-sized companies have not set up the systems they would need to support a data scientist who knows only math/modeling. Depending on the domain, it's not uncommon to find that a small/medium company outsourced analytics to Firebase, Flurry, etc.

That's fine, but when it comes time to create some customer segmentation models (or whatever), the data scientist they hire is going to need to know how to get the raw data. The questions become: how do I write code to talk to this API? How do I download 6 months of data, normalize it (if needed), and store it in a database? Those questions flow over into: how do I set up a hosted database with a cloud provider? What happens if I can't use the COPY command to load in huge CSV files? How do I tee up 5 TB of data so that I can extract from it what I need to do the modeling? Then you start looking at BigQuery or Hadoop or Kafka or NiFi or Flink, and you drown for a while in the Apache ecosystem.

If you take a job at a place that has those needs, be prepared to spend months, or even up to a year, setting up processes that let you access the data you need for modeling without going through a painful 75-step process each time.

Case in point: I recently worked on a project where the raw data came to me in 1,500 different Excel workbooks, each of which had 2-7 worksheets. All of the data was in 25-30 different schemas, in Arabic, and the Arabic was encoded with different codepages depending on whether it came from Jordan, Lebanon, Turkey, or Syria. My engagement was to do modeling with the data and, as is par for the course, the expectation was that I would get the data organized. To be more straightforward, the team with the data did not even know that the source format would present a problem. There were ~7,500 worksheets, all riddled with spelling errors and the type of things that happen when humans interact with Excel: added/deleted columns, blank rows with ID numbers, comments, different date formats, PII scattered everywhere, etc. (see the ingestion sketch after this comment).

A data scientist's toolkit needs to be flexible. If you have in mind that you want to do financial modeling with an airline or a bank, then you can probably focus on the mathematics and forget the data wrangling. If you want the flexibility to move around, you're going to have to learn both. The only way to really learn data wrangling is through experience, though, since almost every project is fundamentally different. From that perspective, having a rock-solid understanding of some key backend technologies is important. You'll need to know Postgres (or some SQL database) up and down: how to install, configure, deploy, secure, access, query, tweak, delete, etc. You really need to know a very flexible programming language that comes with a lot of libraries for working with data of all formats. My choice there was Python. Not only do you need to know the language well, you need to know the common libraries you can use for wrangling data quickly and then also for modeling.

IMO, job descriptions for "Data Scientist" positions cover too broad a range, often because the people hiring have just heard that they need to hire one. Think about where you want to work and/or the type of business. Is it established? New? Do they have a history of modeling? Are you their first "Data Scientist"? All of these questions will help you determine where to focus first with your skill development.
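A minimal sketch of the first step in a wrangle like the Excel case above: sweep every worksheet of every workbook into one frame, tagging provenance as you go (the directory name is hypothetical; schema normalization and encoding repair would come next):

```python
import pathlib
import pandas as pd

frames = []
for path in pathlib.Path("raw_workbooks").glob("*.xlsx"):
    # sheet_name=None loads every worksheet as a {name: DataFrame} dict
    for sheet, df in pd.read_excel(path, sheet_name=None).items():
        df["source_file"], df["source_sheet"] = path.name, sheet
        frames.append(df)

combined = pd.concat(frames, ignore_index=True)
```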
in9 · over 7 years ago
I saw a simple tool somewhere a while ago (maybe a month or so ago): a simple CLI for data inspection in the terminal. It seemed very useful for inspecting data while ssh'ed into a machine.

However, I can't seem to recall the name. Has anyone seen what I'm talking about?
anc84 · over 7 years ago
Any programming language that you are proficient in. A solid understanding of how a computer works. A solid basis in statistics. Anything else is just sprinkles, trends, and field-specific.
eggie5 · over 7 years ago
Are a lot of people using Spark?
latenightcoding · over 7 years ago
If you use Python: scikit-learn, Pandas, NumPy, TensorFlow or PyTorch.

Language-agnostic: XGBoost, LibLinear, Apache Arrow, MXNet.
spdustin · over 7 years ago
OpenRefine (openrefine.org) is definitely a handy (and automate-able) part of my data-cleansing workflow.
eps · over 7 years ago
You probably mean "data analyst".

The "data scientist" title would apply only if you are applying the scientific method to discover new facts about the natural world exclusively through data analysis (as opposed to observation and experiments).
sdfjkl · over 7 years ago
NumPy, Jupyter (formerly IPython Notebook), and probably Mathematica anyway.
amelius · over 7 years ago
Any book recommendations?
ellisv · over 7 years ago
Counting and dividing.
topologie · about 7 years ago
Random Matrix Theory.
kome · over 7 years ago
Excel, VBA, SPSS ;)