
Ask HN: What useful internal tools or libraries have you built in your company?

59 points by crack-the-code, about 6 years ago
I'm curious to know what kind of tools, scripts, automation, libraries, etc. you all have built to help boost the productivity of your team(s).

18 comments

z3ugma, about 6 years ago
At a company of 10,000, it's important to know the 100 people you'll be working closest to. I built a "memory" game as a webapp which matched the faces of 4 people on your team to a single name and a list of self-assigned skills. You click on a photo to match a name to a face, and once you guess right a new set loads. You can randomly click through your whole team and learn a lot about them in just 15 minutes or so.

The whole thing was built with read-only SQL scripts, Flask, and some jQuery.
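A minimal sketch of what the server side of such a game could look like (Flask plus a read-only SQLite query; the table, columns and route are illustrative assumptions, not the author's code):

    # round.py - rough sketch of a "match the name to the face" round (hypothetical schema)
    import random
    import sqlite3

    from flask import Flask, jsonify

    app = Flask(__name__)
    DB_PATH = "file:people.db?mode=ro"  # opened read-only, as in the original

    @app.route("/round")
    def new_round():
        # Pull four random teammates; one of them is the name to match.
        with sqlite3.connect(DB_PATH, uri=True) as db:
            rows = db.execute(
                "SELECT id, name, photo_url, skills FROM people ORDER BY RANDOM() LIMIT 4"
            ).fetchall()
        target = random.choice(rows)
        return jsonify({
            "name_to_match": target[1],
            "answer_id": target[0],  # the front end hides this until a guess is made
            "choices": [{"id": r[0], "photo": r[2], "skills": r[3]} for r in rows],
        })

    if __name__ == "__main__":
        app.run(debug=True)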
reacharavindh, about 6 years ago
At my current job, I saw a lab technician working manually with Excel sheets, entering sample IDs and then using a website where he'd paste each sample ID to get a barcode, which he then printed out and stuck on the box.

I wrote a Python script that uses the openpyxl module to read his Excel docs and the reportlab module to generate barcodes in a PDF document with appropriate spacers, so that he can simply print it out and stick the labels on the boxes.

He is happy, and so am I that I could save him time. It only took me 20 minutes to write this script.
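A rough sketch of that kind of script (column position, page layout and barcode type are assumptions, not the original code):

    # barcodes.py - read sample IDs from a spreadsheet and lay out Code128 barcodes in a PDF
    from openpyxl import load_workbook
    from reportlab.lib.pagesizes import A4
    from reportlab.lib.units import mm
    from reportlab.pdfgen import canvas
    from reportlab.graphics.barcode import code128

    wb = load_workbook("samples.xlsx", read_only=True)
    ids = [str(row[0]) for row in wb.active.iter_rows(min_row=2, values_only=True) if row[0]]

    pdf = canvas.Canvas("labels.pdf", pagesize=A4)
    x, y = 20 * mm, A4[1] - 30 * mm
    for sample_id in ids:
        barcode = code128.Code128(sample_id, barHeight=15 * mm, barWidth=0.4 * mm)
        barcode.drawOn(pdf, x, y)
        pdf.drawString(x, y - 5 * mm, sample_id)  # human-readable ID under the bars
        y -= 30 * mm                              # spacer so labels cut/stick cleanly
        if y < 30 * mm:                           # start a new page when the column is full
            pdf.showPage()
            y = A4[1] - 30 * mm
    pdf.save()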
digitalsushi, about 6 years ago
I wrote a shim layer for all our Packer/Vagrant OS workflows to operate against an unreliable vSphere ecosystem. It exposes a suite of POSIX sh functions for sysadmins/developers to easily operate against this very unreliable environment. It adds automatic logging, retrying, and adjustable verbosity, because of the numerous ways this environment randomly fails.

People can just dot-source the file from a shared location and often find that their scripts simply start to work better. It's not perfect, nothing's perfect. It's not even that clever. But when builds and deploys start to work twice as well, even with the remaining failures, well, that's something. None of the 65,000 employees using it will ever know, but it feels good to know we were dropping 2/3 of orders and now we're dropping 1/3.
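The original was a sourced POSIX sh library, but the core idea (wrap flaky commands with logging, retries and adjustable verbosity) looks roughly like this Python analogue, sketched here purely for illustration:

    # retry_run.py - rough analogue of a "retry + log + verbosity" shim for flaky infrastructure
    import logging
    import os
    import subprocess
    import time

    logging.basicConfig(level=os.environ.get("SHIM_LOG_LEVEL", "INFO"))
    log = logging.getLogger("shim")

    def run(cmd, attempts=5, delay=10):
        """Run a command against an unreliable backend, retrying transient failures."""
        for attempt in range(1, attempts + 1):
            log.info("attempt %d/%d: %s", attempt, attempts, " ".join(cmd))
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return result.stdout
            log.warning("failed (rc=%d): %s", result.returncode, result.stderr.strip())
            time.sleep(delay)
        raise RuntimeError(f"gave up after {attempts} attempts: {cmd}")

    if __name__ == "__main__":
        print(run(["vagrant", "up"]))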
MediumD, about 6 years ago
Back at my old job, people would have trouble knowing what to do when on-call.

I built a Slack app that would keep track of my team's pages and what people did to respond to them. As new pages were triggered, the bot would show the on-call person what previous people had done to resolve the page.
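A sketch of the core of such a bot (the storage schema, channel name and pager hook are assumptions; only the slack_sdk call is real API):

    # oncall_memory.py - remember how past pages were resolved and surface that on new pages
    import os
    import sqlite3

    from slack_sdk import WebClient

    slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    db = sqlite3.connect("pages.db")
    db.execute("CREATE TABLE IF NOT EXISTS resolutions (alert TEXT, action TEXT)")

    def record_resolution(alert_name, action_taken):
        db.execute("INSERT INTO resolutions VALUES (?, ?)", (alert_name, action_taken))
        db.commit()

    def on_new_page(alert_name, channel="#oncall"):
        rows = db.execute(
            "SELECT action FROM resolutions WHERE alert = ? ORDER BY rowid DESC LIMIT 3",
            (alert_name,),
        ).fetchall()
        history = "\n".join(f"- {action}" for (action,) in rows) or "No previous resolutions recorded."
        slack.chat_postMessage(channel=channel, text=f"{alert_name} fired. Previously:\n{history}")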
i5h4n, about 6 years ago
In my previous organization, we dealt with a legacy enterprise software product which had accumulated a massive bug history over multiple years and sub-products, all tracked by an in-house bug-tracking product.

Lots of the issues we saw reported were either already fixed or turned out to be configuration issues. In order to (somewhat) quickly find existing fixes/comments for issues that got reported, I built a search tool (webapp) which scraped the bugs and their comments to find any relevant information for your query and listed the results in order of matching probability.

It was a pretty cool learning experience to build that out. I had deployed it on a personal remote VM that devs were granted; I have no idea if people are still using it.
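A minimal sketch of the ranking step of such a tool, using TF-IDF over the scraped bug text (scikit-learn; the data loading and bug structure are assumptions):

    # bug_search.py - rank scraped bug reports/comments by similarity to a query
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def search(bugs, query, top_n=10):
        """bugs: list of (bug_id, full_text) tuples scraped from the internal tracker."""
        texts = [text for _, text in bugs]
        n = len(texts)
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform(texts + [query])   # last row is the query
        scores = cosine_similarity(matrix[n], matrix[:n]).ravel()
        ranked = sorted(zip(scores, (bug_id for bug_id, _ in bugs)), reverse=True)
        return [(bug_id, round(float(score), 3)) for score, bug_id in ranked[:top_n] if score > 0]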
tehlike, about 6 years ago
I built a hacked-up experimentation framework for client-side flags, which boosted my team's speed and confidence quite a bit. Hacked up because it didn't use the existing server-side mechanism, for a bunch of reasons.

I used the same experimentation framework for automated JavaScript binary releases, so at some point I could release 5 times a week with no issues. Now that I've left the team, people have taken it on and are continuing like clockwork.

I also showed them how to use powerdrill (a data drilling/analysis tool) and taught them metrics. It is surprising how little people eventually care about what their work is really for, and bringing them a data-driven mindset gave an even bigger productivity boost.
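Not the actual framework, but the usual trick behind client-side experiment flags is deterministic hashing of user + experiment into a percentage bucket, roughly:

    # flags.py - deterministic percentage rollout for client-side experiment flags
    import hashlib

    def in_experiment(user_id, experiment, rollout_percent):
        """Stable assignment: the same user always lands in the same bucket."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < rollout_percent

    # e.g. ship an experimental JS bundle to 20% of users, widening the rollout each release
    if in_experiment("user-1234", "new-renderer", 20):
        print("serve experimental bundle")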
Adamantcheese, about 6 years ago
At my last job we had to do builds constantly and put them on hardware, which was annoying because builds took 15 minutes and loading the hardware took another 10. I couldn't solve the latter because it wasn't in our domain, but for the first half I managed to "multithread" a build using a really hacky batch-script compilation method: a make file calling the compiler in a new command window for each file that needed to be compiled, with some checks for "needs to be compiled" versus "wasn't changed". An extra script at the end of the process made sure that all the compiler instances had finished before continuing with the next step. All of that work got builds down to 2 minutes, or about 30 seconds for small changes.

Another part of that was integrating some configuration data with existing files, which was as simple as writing up a bunch of Excel macros to do the copy/pasting and file output. It was hooked up to a shared folder on the network so the other team could just do their part, and then my part was entirely automated. In fact, the team testing things could do everything by themselves without any input from me at that point, and only needed me to answer certain questions.

Yes, it's really hacky, and the whole thing is entirely silly and could have been solved with more proper tools (i.e. not a defunct make tool without wildcard support for input files, or Excel for configuration), but I was VERY pleased when I got it working.
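A cleaner version of the same "run compiles in parallel, skip unchanged files, wait for them all" idea, sketched in Python with placeholder compiler and paths:

    # parallel_build.py - compile changed sources concurrently, then continue once all finish
    import glob
    import os
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def needs_compile(src):
        obj = src.replace(".c", ".o")
        return not os.path.exists(obj) or os.path.getmtime(src) > os.path.getmtime(obj)

    def compile_one(src):
        # placeholder compiler invocation; the original spawned a command window per file
        subprocess.run(["cc", "-c", src, "-o", src.replace(".c", ".o")], check=True)
        return src

    sources = [s for s in glob.glob("src/**/*.c", recursive=True) if needs_compile(s)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for done in pool.map(compile_one, sources):   # blocks until every compile has finished
            print("compiled", done)
    print("all objects up to date, continuing with the next build step")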
actionowl, about 6 years ago
I was working on a project where we'd be printing several hundred thousand badges for several schools. We had all the data and just needed photos. The client sent us a DVD with several hundred thousand photos, and upon inspection we realized that the photos were really bad:

- No single aspect ratio
- Some photos had no one in them (a picture of a chair, etc.)
- Some photos had multiple people in them (!?)
- Some photos were of such poor quality that you couldn't make out the person.

It seemed some locations let the students provide their own photo. This was the first time we'd ever encountered data in this shape.

My company had two options: print the data as-is (which would result in thousands of reprints) or hire some temp staff to sort through the photos.

I asked them to let me try to sort them over the weekend with a library I had just learned about (OpenCV). I was able to write a custom OpenCV Python script a little over a hundred lines long and ran it over the weekend to crop and sort the photos into several categories (based on face detection), leaving only a few thousand that had to be manually reviewed! That had a real dollar impact and felt really good.
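A sketch of what that kind of triage script looks like with OpenCV's bundled Haar cascade (thresholds and folder names are illustrative, not the original script):

    # sort_photos.py - bucket badge photos by how many faces OpenCV detects in each
    import glob
    import os
    import shutil

    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    for d in ("ok", "no_face", "multiple_faces", "review"):
        os.makedirs(d, exist_ok=True)

    for path in glob.glob("photos/*.jpg"):
        img = cv2.imread(path)
        if img is None:                                    # unreadable/corrupt file
            shutil.copy(path, "review")
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 1:                                # exactly one face: crop it for the badge
            x, y, w, h = faces[0]
            cv2.imwrite(os.path.join("ok", os.path.basename(path)), img[y:y + h, x:x + w])
        else:
            shutil.copy(path, "no_face" if len(faces) == 0 else "multiple_faces")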
stevekemp, about 6 years ago
In the past week I've written a broken-link checker, in Perl, to sanity-check the output of a static-site generator.

I've also written a trivial PHP parser which was designed to match up class definitions with the comments above them:

https://blog.steve.fi/parsing_php_for_fun_and_profit.html

Both of these tools were designed to be invoked by CI/CD systems, to flag potential problems before they went live.

Most of my work involves scripting, or tooling, around existing systems and solutions. For example, another developer-automation hack was to automatically add the `approved` label to pull requests which had received successful reviews from all selected reviewers, on a self-hosted GitHub Enterprise installation.
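The original checker was Perl; a Python analogue of the same idea for static-site output (internal links only, intended to run in CI) might look like:

    # linkcheck.py - verify that internal links in generated HTML point at files that exist
    import glob
    import os
    import sys
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href" and v)

    broken = 0
    for page in glob.glob("output/**/*.html", recursive=True):
        parser = LinkCollector()
        with open(page, encoding="utf-8") as f:
            parser.feed(f.read())
        for href in parser.links:
            if href.startswith(("http://", "https://", "mailto:", "#")):
                continue                                   # only local links are checked here
            target = os.path.normpath(os.path.join(os.path.dirname(page), href.split("#")[0]))
            if not os.path.exists(target):
                print(f"{page}: broken link -> {href}")
                broken += 1
    sys.exit(1 if broken else 0)                           # non-zero exit fails the CI job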
rahulrrixe, about 6 years ago
I built a code-generator package in Kotlin which generates code for Kotlin, Swift, Web (JS), and React Native (TypeScript). Basically, you provide your class definition in a DSL style (similar to TOML) and it will generate the implementation and interfaces of the bridge for the different technologies.
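Not the actual package, but the general shape of that kind of generator (one class definition in, per-platform source out) is roughly this toy sketch:

    # codegen.py - toy illustration: one class definition in, Kotlin + TypeScript out
    FIELDS = {"id": "Int", "title": "String", "done": "Boolean"}   # stands in for the DSL input

    def to_kotlin(name, fields):
        body = ",\n".join(f"    val {f}: {t}" for f, t in fields.items())
        return f"data class {name}(\n{body}\n)"

    def to_typescript(name, fields):
        ts_types = {"Int": "number", "String": "string", "Boolean": "boolean"}
        body = "\n".join(f"  {f}: {ts_types[t]};" for f, t in fields.items())
        return f"export interface {name} {{\n{body}\n}}"

    print(to_kotlin("Task", FIELDS))
    print(to_typescript("Task", FIELDS))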
solumos, about 6 years ago
When our company was doing more active Go development, a colleague and I built Charlatan.

https://github.com/percolate/charlatan

Ended up saving us a lot of time writing mocks for tests.
cyanide911, about 6 years ago
Python 3+: Blue, a dead-simple event-based workflow execution framework.

I always find it easier to model systems from an event-driven perspective, especially when you have to move fast and evolve unpredictably. I wanted a framework anyone could learn to use within 5-10 minutes. At the same time, it should be able to solve all kinds of use cases that require event-based coordination between tasks in a distributed environment.

It works well for us for simple use cases (e.g. data processing workflows) and complex ones (e.g. our entire retail order fulfilment system).
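This is not Blue's actual API, but a minimal in-process illustration of the event-driven coordination style described:

    # events.py - tiny illustration of event-driven task coordination
    from collections import defaultdict

    handlers = defaultdict(list)

    def on(event_name):
        """Register a task to run whenever an event of this name is emitted."""
        def register(fn):
            handlers[event_name].append(fn)
            return fn
        return register

    def emit(event_name, **payload):
        for fn in handlers[event_name]:
            fn(**payload)

    @on("order.placed")
    def reserve_stock(order_id, **_):
        print(f"reserving stock for {order_id}")
        emit("stock.reserved", order_id=order_id)

    @on("stock.reserved")
    def ship(order_id, **_):
        print(f"shipping {order_id}")

    emit("order.placed", order_id="A-42")   # kicks off the chain: reserve -> ship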
dhruvkar, about 6 years ago
I wrote a shipping container tracking system for ~7 shipping lines.

Each shipping line offers a tracking service through one of these methods: email, RSS, or a website form. Our container numbers are collected into a Google Spreadsheet via our freight forwarders. Our employees use an antiquated ERP with no API.

The script collects the relevant container numbers from the Google spreadsheet, scrapes the tracking update from the shipping line, and then drives the ERP system (screen-scraping it) to enter the update.
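A hedged sketch of the glue between the spreadsheet and the carrier sites (the sheet export URL, tracking endpoint and status selector are all placeholders; the real script also keyed results into the ERP):

    # track_containers.py - pull container numbers from a sheet export and scrape status updates
    import csv
    import io
    import re

    import requests

    SHEET_CSV = "https://docs.google.com/spreadsheets/d/<sheet-id>/export?format=csv"  # placeholder
    TRACKING_URL = "https://example-shipping-line.com/track?container={}"              # placeholder

    def container_numbers():
        rows = csv.reader(io.StringIO(requests.get(SHEET_CSV, timeout=30).text))
        next(rows)                                    # skip the header row
        return [row[0] for row in rows if row and row[0]]

    def latest_status(container):
        html = requests.get(TRACKING_URL.format(container), timeout=30).text
        match = re.search(r'class="status">([^<]+)<', html)   # made-up selector
        return match.group(1).strip() if match else "unknown"

    for number in container_numbers():
        print(number, latest_status(number))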
Random_Person, about 6 years ago
I wrote a custom documentation tool that we use on all of our projects. It's a few input fields for heading/paragraph/images and a few buttons. You can add as many "sections" as you want. It exports HTML/CSS that you can stick in any <div>, and it scales well, handles popups for images, and such. It's made our life much simpler when adding documentation to our sites.
shanecleveland, about 6 years ago
Automated discovery of late shipments eligible for a refund, which the carriers otherwise make very difficult to track. There are some services that can do this, but they take a big chunk of the refunds. We save thousands each year.

Also many other specialized calculators and templates, which tend to be more foolproof than Excel.
schappim, about 6 years ago
Mine was hardware- and software-related.

I built a WebUSB postal scale and a WebUSB label printer so our e-commerce company could print carrier shipping labels with just one click.

It took the process of fulfilling an order down to ~10 seconds per order.
theSage, about 6 years ago
Wrote a simple fizzbuzz server which brought down the time we spent interviewing freshers for internships/jobs. Since we're a small team, this had a big impact.
atomashpolskiy, about 6 years ago
My last job was at a company that develops one of the most popular mobile MMO action games in the world (with hundreds of millions of installs). It stores data in large Cassandra clusters (depending on the platform, DCs contain up to a hundred nodes).

What I did was design and develop a command-line utility/daemon for performing one-off and regular backups of production data. The solution is able to:

- work with a 24/7 live Cassandra cluster containing tens of nodes
- exert a tolerable and tuneable performance/latency footprint on the nodes
- back up and restore from hundreds of GBs to multiple TBs of data as fast as possible, given the constraints of the legacy data model and concurrent load from online players; observed throughput is 5-25 MB/s, depending on the environment
- provide highly flexible declarative configuration of the subset of data to back up and restore (full table exports; raw CQL queries; programmatic extractors), with first-class support for foreign-key dependencies between extractors, compiled into a highly parallelizable execution graph

There was an "a-ha!" moment when I realized that this utility could be used not only for backups of production data, but for a whole range of day-to-day maintenance tasks, e.g.:

1) Restore a subset of production data onto development and test machines. This solves the issue of developers and QA engineers having to fiddle with the database when they need to test something, whether it be a new feature or a bugfix for production. They can just restore a small subset of real, meaningful and consistent data onto their environment with just a bit of configuration and a simple command. Developers may do this manually when needed, and the QA environment can be restored to a clean state automatically by the CI server at night.

2) Perform arbitrary updates of graphs of database entities. It's a common approach to traverse Cassandra tables, possibly with a column filter, in order to process/update some of the attributes (e.g. iterate through all users and send a push notification to each of them). The more users there are, the longer it takes, and it negatively affects the cluster's performance and latency for other concurrent operations. Having a tool like the one I described, one may clone the user data onto a separate machine beforehand (e.g. at night) and then run the maintenance operation during the day, on data that is still reasonably up to date.

All in all, it was a fun devops experience, which I'm quite fond of. With just a little creativity and out-of-the-box thinking, there are lots of ways to improve the typical workflow of working with data.
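A very simplified sketch of just the "page through a live table gently" part (cassandra-driver; contact points, keyspace, table and throttle values are placeholders, not the actual tool):

    # export_table.py - stream one table to disk in pages, throttling to limit cluster impact
    import csv
    import time

    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    PAGE_SIZE = 1000
    PAUSE_BETWEEN_PAGES = 0.2          # seconds; tune to keep the latency footprint tolerable

    cluster = Cluster(["10.0.0.1"])    # placeholder contact point
    session = cluster.connect("game")  # placeholder keyspace

    stmt = SimpleStatement("SELECT user_id, profile, updated_at FROM players", fetch_size=PAGE_SIZE)
    rows = session.execute(stmt)

    with open("players_backup.csv", "w", newline="") as out:
        writer = csv.writer(out)
        for i, row in enumerate(rows, 1):          # the driver fetches the next page lazily
            writer.writerow([row.user_id, row.profile, row.updated_at])
            if i % PAGE_SIZE == 0:
                time.sleep(PAUSE_BETWEEN_PAGES)    # crude throttle between pages

    cluster.shutdown()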