Tuesday 14 June 2022

Credibility of predictive Data-driven vs Knowledge-driven models: a layperson explanation

In science the concept of truth has a meaning quite different from its colloquial use.  Scientists observe a natural phenomenon and formulate different hypotheses on why things happen the way we observe them. There are many ways to formulate such hypotheses, but the preferred one is to express them in quantitative, mathematical terms, which makes it easier to test whether they are well founded.  

Once a certain hypothesis is made public, all scientists investigating the same natural phenomenon start to design experiments that could demonstrate that the hypothesis is wrong.  It is only when all possible attempts have been made, and the hypothesis has resisted all such attempts to prove it wrong, that we can call it a “scientific truth”. What that means is “so far no one could prove it wrong, so we temporarily assume it to be true”.

Achieving a scientific truth is a long and costly process, but it is worth it: once a hypothesis becomes a scientific truth, or as we will call it from now on, scientific knowledge, it can be used to make predictions on how to best solve problems related to the natural phenomenon it refers to. At the risk of oversimplifying, physics aims to produce new scientific knowledge, which engineering uses to solve the problems of humanity. 

For the purpose of this note, it is important to stress that the mathematical form chosen to express a hypothesis cannot contradict the pre-existing scientific knowledge accumulated so far.  For example, we are quite sure that matter/energy cannot be created or destroyed, but only transformed; in physics this is called a conservation law. Hence, any mathematical form we use to express a scientific hypothesis must not violate the law of conservation.


But the need to solve humanity's problems cannot wait for all the necessary scientific knowledge to become available, considering it may take centuries for scientists to produce it.  Thus, scientists have developed methods that can be used to solve practical problems even when no knowledge is available, as long as there is plenty of quantitative data obtained from observing the phenomenon of interest.  When the necessary scientific knowledge is available, we solve problems by developing predictive models based on that knowledge; otherwise, we use models developed only from observational data.  We call the first type knowledge-driven models, and the second type data-driven models.  The first type includes, for example, models built from the scientific knowledge provided by physics, chemistry, and physiology.  Data-driven models include statistical models and the so-called Artificial Intelligence (AI) models (e.g. machine-learning models).


Now, if the problem at hand is critical (for example, when a wrong solution may threaten people's lives), before we use a model to solve it we need to be fairly sure that its predictions are credible, which means sufficiently close to what actually happens in reality.  Thus, for critical problems, assessing the credibility of a model is vital. Most problems related to human health are critical, so it should not be a surprise that assessing the credibility of predictive models is a very serious matter in this domain. Unfortunately, assessing the credibility of a data-driven model turns out to be very different from assessing the credibility of a knowledge-driven model.  While the precise explanation of why these are different is quite convoluted and requires a solid grasp of mathematics, here we provide a layperson explanation, aimed at all healthcare stakeholders who by training do not have such a mathematical background, but still need to make decisions on the credibility of models.


In order to quantify the error made by a predictive model, we need to observe the phenomenon of interest in a particular condition, measure the quantities of interest, then reproduce the same conditions with the model, and compare the quantities it predicts to those measured experimentally.  Of course, this can be done only in a finite number of conditions; but how can we be sure that our model will continue to show the same level of predictive accuracy when we use it to predict the phenomenon in a condition different from those we tested?  Here is where the difference in how the model was built plays an important role. 

For knowledge-driven models it can be demonstrated that the mathematical forms chosen to express that knowledge, forms that must be compatible with all pre-existing scientific knowledge, ensure that if the model makes a prediction for a condition close to one we tested, its prediction error will also be close to the one quantified in the test. This allows us to assume that once we have quantified the prediction error for a sufficiently large number of conditions within a range, the prediction error will remain comparable for any other condition within that range.  The benefit of this is that for knowledge-driven models we can conduct a properly designed validation campaign, at the end of which we can state with sufficient confidence the credibility of the model.

However, this is not true for data-driven models.  In theory, a data-driven model could be very accurate for one condition, and totally wrong for another close to it.  So its credibility cannot be stated once and for all.  Assessing the credibility of a data-driven model is a continuous process: while we use the model, we periodically need to confirm that the predictive accuracy remains within the acceptable limits, by comparing the model's predictions to new experimental observations.
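To make this concrete, here is a minimal sketch (in Python, with an entirely hypothetical model, data and acceptance threshold) of what such a periodic credibility check could look like:

```python
import statistics

def mean_absolute_error(predictions, observations):
    """Average absolute difference between model predictions and new measurements."""
    return statistics.mean(abs(p - o) for p, o in zip(predictions, observations))

def revalidate(model, new_conditions, new_observations, acceptable_error):
    """Periodic credibility check for a data-driven model: compare fresh
    predictions against fresh experimental observations."""
    predictions = [model(c) for c in new_conditions]
    error = mean_absolute_error(predictions, new_observations)
    return error <= acceptable_error, error

# Hypothetical stand-in for a previously trained data-driven model,
# plus a batch of new experimental observations.
model = lambda x: 2.0 * x + 0.1
new_conditions = [1.0, 2.0, 3.0]
new_observations = [2.2, 4.0, 6.3]

still_credible, error = revalidate(model, new_conditions, new_observations,
                                   acceptable_error=0.5)
print(still_credible, round(error, 3))
```

If the check fails, the model would need retraining or its use suspended; the point is that, unlike for a knowledge-driven model, this check can never be done once and for all.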


To further complicate the matter, sometimes a model is composed of multiple parts, some built using a data-driven approach, others using a knowledge-driven approach.  In such complex cases the model must be decomposed into sub-models, and each needs to be assessed in terms of credibility in the way most appropriate for its type.


In conclusion, when no scientific knowledge is available for the phenomenon of interest, only data-driven models can be used. In that case, credibility assessment is a continuous process, like quality assessment: after the model is in use, we periodically need to reassess its predictive accuracy against new observational data.  By contrast, when scientific knowledge is available, knowledge-driven models are preferable, because their credibility can be confirmed with a finite number of validation experiments.




Friday 8 January 2021

Positioning In Silico Medicine as a computationally-intensive science: a call to arms

In the last few months I have followed with growing interest the recent developments of the computational sciences, and I felt compelled to raise a warning, which becomes a call for engagement to the entire In Silico Medicine community.

At the risk of oversimplifying, with the launch of the EuroHPC initiative the European Commission has made a clear move in two directions: exascale computing (the development and effective use of new computer systems capable of 10^18 floating-point operations per second) and quantum computing (the use of quantum phenomena to perform computation).  

Because of the strategic nature of this initiative, all computational sciences are slowly being divided into those that are considered computationally intensive and those that are not: the first will be asked to contribute to the definition of the specifications of these new exascale and quantum computing systems (codesign); as part of this, they will most likely receive dedicated funding, directly or by earmarking funding for solutions that exploit high-performance computing (HPC), as we already saw in some Covid-related calls in H2020.  I am less familiar with the other regions of the world, but my impression is that the political agenda around the strategic value of HPC is the same in the USA, China, Japan, India, etc.  Thus, I dare say that the same trend is probably being observed everywhere.

There are some domains that are unquestionably seen as HPC science: weather, climatology and solid Earth sciences; astrophysics, high-energy physics and plasma physics; materials science, chemistry and nanoscience. When we look at life sciences and medicine, the picture is blurred: there is a clear case for molecular simulations, but much less clarity for single-cell systems biology, and even less for systems physiology.  In Silico Medicine, intended as the clinical and industrial application of computational biomedicine methods, is in my opinion at present far from making a clear case for being an HPC scientific domain.  The 2013 Nobel Prize in Chemistry was given to a group of computational chemists; it will take some decades yet before we can expect a Nobel Prize in Medicine for a computational researcher.

Having worked in this field from its beginnings 20 years ago, I can understand why most in silico medicine researchers see the computational challenge as an immaterial detail: as a community, our main focus is still on the credibility of our predictions in the clinical and regulatory context.

But I am worried that if we miss this train, it might take a long while before another passes.  I think that as we prepare for Horizon Europe, or for the next round of NIH and NSF funding, we need to start thinking seriously about where the added value of porting our applications to HPC architectures lies, and to develop an HPC science research agenda where scalability is key.  We need to think grand science, from a computational point of view.  And we need to pursue computational grand challenges: can we simulate a phase III clinical trial by running 1000 patient-specific models?  Can we model all the cells in a whole tumour? Can we model the electrophysiology of all the cardiomyocytes in a human heart?  Can we couple a whole fluid-electro-mechanical model of the heart with a full fluid-chemo-mechanical model of the lungs?

Another thing we need to start working on as a community is the idea of the Virtual Physiological Human.  There is a funny story here: soon after the term was coined in 2005, we started having to defend it from those who asked: are you planning to capture the entire human physiology in a single computer model?  At that time, of course, the answer was no, not even close.  But I think that now this idea should be brought back, if not as a goal feasible any time soon, at least as something to aim for.  We have great models for the bones, joints and muscles; for the heart; for the pancreas; for the liver; for the lungs.  Can we aim for a neuromusculoskeletal model of human movement?  Or a cardiovasculorespiratory model of body oxygenation dynamics?

This is a grand challenge for our community, and I call you all to arms.  Make sure all those who are thinking in this direction, seniors and juniors, join the #Scalability channel:

https://insilicoworld.slack.com/archives/C0151M02TA4 

if you click the link and you get a message saying that you are not a member yet, follow this other link and request to join:

http://insilico.world/scalability-support-channel/

I also ask all of you to start posting your scalability challenges.  If you do not have any, this means you are not thinking big enough, so try again :-).  

We need to obtain, as soon as possible, a good representation of the HPC needs of the In Silico Medicine community, and joining the #Scalability channel is the most effective way. As a bonus, the top HPC experts in Europe who are partners in the CompBioMed Centre of Excellence will be happy to share their wisdom with you through the same channel and help you address your scalability issues in the most effective way.


 

Saturday 21 November 2020

On the regulatory validation of AI models


Accepted epistemology suggests that a theory cannot be confirmed, since this would require infinite tests, but only falsified. So, we can never say a theory is true, only that it has not been disproved so far.  However, falsification attempts are not made randomly; they are purposely crafted to seek out all possible weaknesses. So the process empirically works: theories that resisted falsification for some decades were not falsified subsequently, at most extended. For example, the theory of special relativity addressed the special case of bodies travelling close to the speed of light, but did not truly falsify the second law of dynamics.  In fact, if you write Newton’s law as F = dp/dt, where p is the momentum, even the case where the mass varies due to relativistic effects is included.

But once a theory has resisted extensive attempts at falsification, for all practical purposes we assume it is true, and use it to make predictions.  However, our predictions will be affected by some error, not because the theory is false, but because of how we use it to make predictions.  An accepted approach suggests that the prediction error of a mechanistic model can be described as the sum of the epistemic error (due to our imperfect application of the theory to the physical reality being predicted), the aleatoric error (due to the uncertainty affecting all the measured quantities we use to inform the model), and the numerical solution error, present only when the equations that describe the model are solved numerically.  For mechanistic models, based on theories that have resisted extensive falsification, validation simply means the quantification of the prediction error, ideally in all three of its components (which gives rise to the verification, validation and uncertainty quantification (VV&UQ) process).
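For the aleatoric component, one common approach is to propagate the measurement uncertainty of the inputs through the model by Monte Carlo sampling. A toy sketch in Python (the spring model, means and standard deviations are all made-up numbers for illustration):

```python
import random
import statistics

def model(stiffness, load):
    """Toy mechanistic model: displacement of a linear spring (Hooke's law)."""
    return load / stiffness

# Propagate the aleatoric uncertainty of the measured inputs through the
# model via Monte Carlo sampling (hypothetical means and standard deviations).
random.seed(42)
displacements = [
    model(random.gauss(100.0, 5.0),   # measured stiffness: 100 +/- 5 N/mm
          random.gauss(50.0, 2.0))    # measured load: 50 +/- 2 N
    for _ in range(10_000)
]
print(f"{statistics.mean(displacements):.3f} +/- {statistics.stdev(displacements):.3f} mm")
```

The spread of the output distribution is the part of the prediction error attributable to the uncertainty of the measured inputs, separate from the epistemic and numerical components.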

A phenomenological model is defined as a predictive model that does not use any prior knowledge to make predictions, only prior observations (data). Analytical AI models are a type of phenomenological model. When we talk about validation for a phenomenological model, we do not simply mean the quantification of the prediction error; since the phenomenological model contains an implicit theory, its validation is more akin to the falsification of a theory. And while an explicitly formulated theory can be purposely attacked in our falsification attempts, the implicit nature of phenomenological models forces us to use brute-force approaches to falsification.  This brings us to the curse of induction: a phenomenological model is never validated; we can only say that, with respect to the validation sets we have used to challenge it so far, the model has resisted our falsification attempts. But in principle nothing guarantees that at the next validation set the model will not be proven totally wrong.

Following this line of thought, one would conclude that locked AI models cannot be trusted.  The best we can do is to formulate AI testing as a continuous process: as new validation sets are produced, the model must be retested again and again, and at most we can say it is valid “so far”.

But the world is not black and white.  For example, while purely phenomenological models do exist, purely mechanistic models do not. A simple way to prove this is to consider that, since the space-time resolution of our instruments is finite, every mechanistic model has some limits of validity imposed by the particular space-time scale at which we model the phenomenon of interest.  The second law of dynamics is no longer strictly valid at the speed of light, and it also shakes at the quantum scale.  To address this problem, virtually EVERY mechanistic model must include two phenomenological portions, which describe everything bigger than our scale as boundary conditions, and everything smaller than our scale as constitutive equations. All this is to say that there are black-box models and grey-box models, but no white-box models; at most light-grey models.  So what?  Well, if every model includes some phenomenological portion, in theory VV&UQ cannot be applied, for the arguments above. But VV&UQ works, and we trust our lives to airplanes and nuclear power stations because it works.

Which brings us to another issue. Above I wrote: “in principle nothing guarantees that at the next validation set the model will not be proven totally wrong”. Well, this is not quite true.  If the phenomenological model is predicting a physical phenomenon, we can postulate some properties.  One very important property, which comes from the conservation principles, is that all physical phenomena show some degree of regularity. If Y varies with X, and for X = 1, Y = 10, and for X = 1.0002, Y = 10.002, then when X = 1.0001 we can safely state that it is impossible that Y = 100,000, or 0.0003.  Y must have a value in the order of 10, because of the inherent regularity of physical processes.  Statisticians recognise this from another perspective (a purely phenomenological one): any finite sample of a random variable might be non-normal, but as we sum the samples the resulting distribution eventually tends to normal (the central limit theorem).  This means that my estimate of an average value will converge asymptotically to the true average value, the one associated with an infinite sample size. 
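The numeric example above amounts to a one-line interpolation; the sketch below (Python, purely illustrative) just makes the regularity argument explicit:

```python
def lerp(x0, y0, x1, y1, x):
    """Linear interpolation between two observed conditions: the regularity of
    physical processes means Y at an intermediate X must lie near the Y values
    observed at the neighbouring X values."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# The example from the text: X = 1 -> Y = 10 and X = 1.0002 -> Y = 10.002
# imply that at X = 1.0001, Y must be close to 10.001 (certainly not 100,000).
print(lerp(1.0, 10.0, 1.0002, 10.002, 1.0001))
```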

Thus, we can say that the estimate of the average prediction error of a phenomenological model will converge asymptotically to the true average prediction error as we increase the number of validation sets.  This means that if the number of validation sets is large enough, the value of the estimate will change monotonically, and its derivative will also decrease monotonically. This makes it possible to reliably estimate an upper bound on the prediction error, even with a finite number of validation sets.
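This convergence can be illustrated with a small simulation (Python; the error distribution is invented, the point is only the behaviour of the running estimate as validation sets accumulate):

```python
import random
import statistics

# Hypothetical prediction errors: each new validation set yields one observed
# error, drawn here from a skewed, non-normal distribution with true mean 0.2.
random.seed(0)
errors = [random.expovariate(1.0 / 0.2) for _ in range(5000)]

# Running estimate of the average prediction error: despite the non-normal
# per-set errors, it converges asymptotically to the true mean.
for n in (10, 100, 1000, 5000):
    estimate = statistics.mean(errors[:n])
    print(f"after {n:4d} validation sets: estimated mean error = {estimate:.4f}")
```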

It is too early in the day to come to any conclusion on whether and how the credibility of AI-based predictors can be evaluated from a regulatory point of view.  Here I have tried to show some aspects of the debate. But personally, I am optimistic: I believe we can reliably estimate the predictive accuracy of all models of physical processes, including purely phenomenological ones.

A final word of caution:  this is definitely not true when the AI model is trying to predict a phenomenon affected by non-physical determinants. Predictions involving psychological, behavioural, sociological, political or economic factors cannot rely on the inherent properties of physical systems, and thus in my humble opinion such phenomenological models can never be truly validated.  I can probably validate an AI-based model that predicts walking speed from the measured acceleration of the body's centre of mass, but not a model that predicts whether a subject will go out walking today.


Saturday 25 April 2020

Fight the good fight: the wind rises

It has been a long time since I last posted anything to my private blog.  Restarting from scratch in Bologna, after seven years in Sheffield, has kept me quite busy.  But yesterday I felt the need, once again, to express some private ideas.

The trigger was that I watched an anime, an animation movie, written and directed by Hayao Miyazaki: The Wind Rises. This 2013 movie tells the story of a young Japanese aeronautical engineer, who develops amazing new airplanes while his country prepares for WWII, and whose wife dies of tuberculosis.  It is a beautiful story, told with absolute grace and a total lack of rhetoric.

But the real trigger was a single scene, where the main character travels by train to reach his young wife, whose condition has taken a turn for the worse. While he travels, worried sick for his beloved, he cries on the sheets of the calculations he is doing for his new, also beloved, airplane.

There are three themes, distinct but entangled, that developed in my head while I was watching.

The first is the fortune of having a true calling.  To warn my students of the risks our work poses in terms of mental health, I always insist that greatness requires obsession, but obsession damages your life; every researcher needs to find a balance between the two.  But watching The Wind Rises reminded me of the many times in my life when, with my heart broken, my science was always there for me, ready to absorb me entirely, taking me away from mundane pains. As Jiro (the main character) travels, he cries; his pains are not forgotten, but he keeps working with his slide rule (slipstick) to finish his calculations.  And of all callings, being a true engineer is an amazing one. Gianni Caproni, the Italian aeronautical engineer, says in one of Jiro's dreams: “But remember this, Japanese boy... airplanes are not tools for war. They are not for making money. Airplanes are beautiful dreams. Engineers turn dreams into reality.”

The second theme is the social responsibility of engineers. Jiro understands that his beautiful plane will feed the imperialistic expansionism of the Samurai class, but somehow separates himself from this: he is turning his dream into reality, and what other people will do with it is not his responsibility. Today is April 25th, which in Italy is the anniversary of the liberation from Nazi-fascism.  WWII claimed 70-80 million lives; besides Hiroshima and Nagasaki, we could remember the firebombing of Tokyo in 1945, which killed over 100,000 people, most of them burned alive. If you visit Tokyo, go to the Edo-Tokyo Museum; there is a whole section on this horror.  You cannot be a good engineer if you are not a humanist, one who trusts that humanity will eventually put our discoveries to good use.  But when the link between your research and military applications is so evident, and it is the intention of your government to use it for aggression, I think we should not forget that in addition to the moral obligations to our dreams, we also have those due to our being citizens and humans.  So let me say it in one-syllable words: I believe Jiro (both the fictional character and the real Jiro Horikoshi, who designed the Mitsubishi A6M Zero fighter) was wrong to continue his work knowing what he knew.

The third theme, totally unrelated (or maybe not), is a reflection on tuberculosis (TB).  To date the coronavirus has killed nearly 200,000 people worldwide.  I could not find any serious projection, but let us say that one year after its start we will have three times this number, say 600,000 deaths.  The world is mobilised; every single research funding agency is rerouting money to support Covid-19 research.  But what about TB, which kills over 1.8 million people worldwide every year?  I am all for this renewed attention to communicable diseases, but please let us not forget those that have been around for a while, only because they are not common in developed countries. In 2016, malaria cost 63 million DALYs (Disability-Adjusted Life Years), HIV 59, TB 45, and other communicable diseases 23.  This is nearly 200 million years of life lost to these diseases.  TB is a horrible disease that infects you but remains silent until you are weaker or older, and then strikes you down.  The Bacille Calmette-Guérin vaccine has been around since 1921, but it has not eradicated the disease in many countries.

To quote the Bard: we few, we happy few, we band of brothers. Scientists and engineers of the world, we should be happy for our calling, because we turn dreams into reality.  But we must channel this calling for the good of humanity, and when there is a risk that our discoveries may be used unethically, we must say no.  Instead, we have to turn our dreams, our creative energies, toward fighting the good fight, for example by ridding the world of ALL communicable diseases affecting humans anywhere.










Tuesday 2 July 2019

Thank you Ansys

For many years I have received support from Ansys Inc. in the form of free software licenses to be used for educational and research activities.

The current Ansys Academic Program requires that we acknowledge on our web page our membership in the program.  So here I am, acknowledging.

Thank you, Ansys.


Tuesday 30 October 2018

Why I left Sheffield

Tomorrow will be my last day of employment at the University of Sheffield, and my first at the University of Bologna.

One of the many blessings of this strange job of mine is that over the years I have met a lot of people, and with many of them I have developed a fairly close relationship.  Seven years ago, when I announced that after more than 20 years I would leave the Rizzoli Institute in Bologna to take a chair of biomechanics at the University of Sheffield, quite a few wrote to ask me why.  Most of them were familiar with the pleasant life that Bologna can offer, and the only Sheffield they knew was that of The Full Monty. Eventually the answer to that question became clear: I moved to Sheffield to make the dream of a lifetime come true, the creation of a very large research institute entirely dedicated to in silico medicine. 

And Sheffield delivered: the dream came true, and it is called Insigneo.  Also, Sheffield turned out to be much better than I thought.  When I left for good a few weeks ago, I did so with strong emotions: this city welcomed me with open arms, I found a lot of friends, and I enjoyed there some of the most exciting years of my life.

So tonight, as I change my status on LinkedIn, I am sure I will start to get many emails asking why I am leaving Sheffield and Insigneo.  But this time I have my blog, so here is the explanation, before you ask.  The answer is made of three parts: Brexit, Cycles, Roots.

Brexit
There is not much to explain here, I guess. The reasons that attracted so many of us from other European countries to the UK are all at risk of disappearing on April 1st, 2019.  I came to the UK because it was one of the most inclusive countries, with an amazing level of multiculturalism in its universities, second only to the USA.  I came because it was a country with a solid economy, with a sensible government, and with a special commitment toward research and the future in general.  And, most important, I came because the UK was part of Europe, and everywhere in Europe is home. Barring a miracle, on April 1st the UK will wake up wrapped in its "little England" xenophobic nostalgia, a little island no longer part of Europe, ready to face a severe economic crisis doubled by a severe political crisis.  I was not given the opportunity to vote in the referendum, but I can express my total disagreement with this decision of the people of the UK by leaving the country.

Cycles
To be honest, during most of my career I experienced a lot of professional frustration; while this surely added to the fire that drove me, to some extent, it also gave me a gastric ulcer.  The experience as Director of the Insigneo institute was amazing, and a couple of years ago, for the first time, I stopped being professionally frustrated.  Insigneo was everything I wanted and more, and I was happy, and satisfied. So satisfied that I started to ask myself if this was what I wanted to do until I retired, in ten more years or so.  I looked back, and God knows how many jobs I have had: I was an engineer in industry, a programmer, a laboratory rat, an entrepreneur, a consultant, a research manager.  But the happiest of all jobs for me is to be an academic.  You teach, supervise a few master's and PhD theses, do research by leading a small group of post-docs. That is what I want to do until I retire.  Building Insigneo was a trip, but my cycle is over; now it is time for someone else who has something to prove.  "Stay hungry, stay foolish," said Steve Jobs; the cycle is over, I am not hungry anymore, time to move on.

Roots
In these seven years I have spent a lot of time with other emigrants, many from my own country.  But there is a big difference between them and me: most of them moved to the UK in their 20s or early 30s.  My wife and I moved to the UK when we were 50.  We had a very good life in Sheffield, but our roots back home are deep, and they have been calling us back with louder and louder voices as we get older.

-------------

But of course none of this would have mattered had I not received this fantastic offer from the University of Bologna.  From tomorrow morning I will be Professore Ordinario (Full Professor) of biomechanics in the Department of Industrial Engineering; I will also go back to lead the Medical Technology Lab at the Rizzoli Institute, where most of my research career took place. 

Many old acquaintances I have met these days ask me: "are you happy?". My answer is always "of course!".  But the truth is that I am both happy and sad.  This change is what I wanted and needed, to some extent, but I will miss Sheffield, Insigneo and all my friends and colleagues.  I might even come to miss fish & chips, in time!






Saturday 11 August 2018

Science and politics? The individual and the collective

The victory of the 5 Star Movement in the last general election has brought the issue of mandatory vaccinations back to the centre of the debate in the media and on social networks.  Much recent political communication seems to frame a clash between scientists and politicians, and hence the need to proclaim the primacy of politics over science.  In some cases the tone is so exasperated that the debate between science and politics almost seems to be back at the time of Galileo, when the church could hardly tolerate that someone with a telescope might question the words of ecclesiastical authority.

But this is only the devastating effect of the total trivialisation of debate, any debate, on social networks.  In reality, the vast majority of Italians agree that science is the best method to investigate the material reality that surrounds us, and that the scientific method ensures that, while individual scientists may be wrong, science as a whole produces conclusions that are the closest thing to truth humanity has ever been able to produce. For example, it is a scientific truth that mandatory vaccinations are necessary to protect public health.  

The fact that it is a scientific truth does not mean that single individuals cannot decide otherwise, for a variety of reasons.  For example, the clear scientific correlation between smoking and lung cancer did not stop me from smoking until I was 50.

The decision on which rules to impose by law on the citizens of a state is a political decision. In deciding a piece of legislation, a good politician must take the scientific evidence into account, together with many other social, cultural, economic, and other factors.   Even though a law that patently contradicts the scientific evidence rarely turns out to be a good idea in the long run, I am convinced that here too the vast majority of Italians agree that science is at the service of politics, and not vice versa.

So what are we really discussing?  The real debate, at the root of the discussion on mandatory vaccinations, on the 8 km of the TAP gas pipeline, and on many other things, is how to balance the individual rights of each citizen with those of the community.  This is a true dilemma, which admits no exact solution that leaves everyone happy. 

The problem is that in Italy, as in many other countries, we have long had a populist opposition which, on every issue, sided with those who opposed a government decision. These people, regardless of who they were, how many they were, and what they opposed, were the "people" standing against the elite whose spokesperson the government of the day was.  

The popularity of populism nowadays is tied to the fact that from the opposition you never need to make any synthesis of conflicting needs. But this problem explodes when the populists win the elections and become the government. If we make vaccines mandatory, the anti-vaccine mothers will object; if we do not make them mandatory, the mothers of immunodepressed children will object. 

Looking at what is happening around the world wherever populist powers are in government, the solution is simple: the problems are not really addressed; politics is made the art of the impossible. So a decree is issued that confirms the obligation, but allows self-certification, which however will not be accepted by many schools, and so on. In short, a mess, in which everyone can claim to have won.

Science never stopped doing its job, and neither did politics. The one who took a long vacation, in the form of a sleep of reason, is citizenship, in the sense of individual responsibility for the collective good.

"El sueño de la razón produce monstruos" (the sleep of reason produces monsters) reads this etching by Francisco Goya.  It is time we reawaken our sense of citizenship, or soon we will find ourselves facing the monsters.





English summary: this blog is mostly written in English.  However, today's entry relates to a debate taking place in the Italian media and on social networks, so I decided to write it in Italian. Science has produced solid evidence that mandatory vaccinations are beneficial to public health.  The decision on vaccination policy is political; but any wise politician should inform their policies with the conclusions of the scientific community.  In spite of the rhetoric, the real question in the Italian political debate is not whether vaccinations are good for public health, but whether we as western societies still care about public health, or more generally, how we reconcile the need for the "public good" with individual freedom.  And that is a political question, not a scientific one.