One suggestion is to build out your use-case instead of searching for a generic technology that can handle it.<p>A general privacy-respecting, health-oriented bucket of medical records is completely tangled in government regulations, organizational constraints, and design-by-committee -- bureaucratic nonsense individuals don't need. It's easy to get lost in the weeds.<p>Often what you want is to know where you stand with respect to a particular condition -- say, atherosclerosis and coronary artery disease.<p>The key thing is to build your model for evaluating that condition, e.g., LDL blood work, night-time systolic blood pressure, pulse oximetry during exercise, exercise tolerance, etc. The model combines those things into a meaningful, actionable score. E.g., it should indicate whether to increase the statins or blood pressure medication, exercise or eat differently, investigate possible primary conditions or genetic risks, etc. I.e., the model represents the ongoing differential diagnosis, and should confirm or invalidate the hypothesis.<p>Then you can start to prune and evolve the model and the data. Maybe the pulse-ox data after exercise, while easy to gather, really means nothing. Maybe the blood work is sensitive to recent exercise, so you get more consistent results by taking 2 days off beforehand. You compare your model to existing models -- e.g., the ASCVD risk estimator -- or update it to track new studies. You start to integrate other models, tracking inflammation or oxidation. As trends accumulate, you'll see associations.<p>My point is that dashboard fly-overs are for executives/managers to make quick decisions about complex topics. We all want to simplify the complex, but sometimes it's actually better to dive into the details to get more clarity and understanding.<p>In that case, what might help is a way of organizing libraries of studies and presentations, building a bunch of one-off analysis tools to ingest data and compare with other models, etc.
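To make the "model as score" idea concrete, here is a minimal Python sketch. Every name, target range, and weight below is an invented placeholder for illustration -- not clinical guidance, and not any standard risk formula:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One measurement plus the range considered on-target (placeholder values)."""
    value: float
    target_low: float
    target_high: float
    weight: float = 1.0  # how much this input matters to the combined score

    def deviation(self) -> float:
        """0.0 inside the target range; grows with relative distance outside it."""
        if self.value > self.target_high:
            return (self.value - self.target_high) / self.target_high
        if self.value < self.target_low:
            return (self.target_low - self.value) / max(self.target_low, 1.0)
        return 0.0

def condition_score(readings: dict[str, Reading]) -> float:
    """Weighted sum of deviations; 0.0 means everything is on target."""
    return sum(r.weight * r.deviation() for r in readings.values())

def flagged(readings: dict[str, Reading], threshold: float = 0.1) -> list[str]:
    """Name each measurement whose deviation alone crosses the threshold."""
    return [name for name, r in readings.items() if r.deviation() > threshold]

# Illustrative inputs only.
readings = {
    "ldl_mg_dl": Reading(value=145, target_low=0, target_high=100, weight=2.0),
    "night_systolic_mmhg": Reading(value=118, target_low=90, target_high=120),
    "exercise_spo2_pct": Reading(value=96, target_low=94, target_high=100),
}
print(condition_score(readings))  # → 0.9 (LDL alone drives the score here)
print(flagged(readings))          # → ['ldl_mg_dl']
```

The point of even a toy version like this is that the score is decomposable: when it moves, you can see which input moved it, which is exactly what you need for the confirm-or-invalidate loop described above.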
Tying them together mainly involves deciding on your data model -- typically one module per source type and another for your own integration.<p>As for technologies, graph DBs are tempting, but most actual models work pretty well in Excel, and pivot tables with graphs get you quite far in analysis and visualization. Mathematica is great here for prototyping because it has proper units, sample APIs, data sources, programmability, visualization, and a clean programming model and tutorials. I recommend it partly because it's a window onto everything current -- LLMs, image recognition, big data, open-source data... The alternatives are to wander the wildlands of Python libraries, or take the rigid Apple museum tour. Mathematica is more like wandering the museum yourself.
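The "one module per source type, one for integration" shape can be sketched in a few lines of Python. The source formats and field names here are hypothetical stand-ins for whatever your lab portal or wearable actually exports:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Observation:
    """Common record every source is normalized into; the integration layer
    only ever sees Observations, never raw source formats."""
    when: date
    metric: str   # e.g. "ldl_mg_dl", "systolic_mmhg"
    value: float
    source: str

# One adapter per source type (field names invented for this sketch).
def from_lab_row(row: dict) -> Observation:
    """Blood-work export shaped like {'date': ..., 'test': ..., 'result': ...}."""
    return Observation(date.fromisoformat(row["date"]), row["test"],
                       float(row["result"]), source="lab")

def from_wearable_row(row: dict) -> Observation:
    """Wearable export shaped like {'day': ..., 'bp_sys': ...}."""
    return Observation(date.fromisoformat(row["day"]), "systolic_mmhg",
                       float(row["bp_sys"]), source="wearable")

# Integration module: merge all sources into one chronological timeline.
def timeline(*batches):
    return sorted((obs for batch in batches for obs in batch),
                  key=lambda o: o.when)

lab = [from_lab_row({"date": "2024-03-01", "test": "ldl_mg_dl", "result": "145"})]
wear = [from_wearable_row({"day": "2024-02-28", "bp_sys": "118"})]
merged = timeline(lab, wear)
print([o.metric for o in merged])  # → ['systolic_mmhg', 'ldl_mg_dl']
```

The payoff is that swapping a lab provider or adding a new device only means writing one more adapter; the analysis side never changes.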