- JEBEILE, Julie. Collaborative scientific practice, epistemic dependence and opacity: the case of space telescope data processing, Philosophia Scientiae, special issue on social epistemology, coedited by Pascal Engel, Olivier Ouzilou and Pierre Willaime, 2018 (abstract)
Wagenknecht recently introduced a (non-exhaustive) conceptual distinction between translucent and opaque epistemic dependence, in order to better account for the diversity of relations of epistemic dependence within collaborative research practices. Building on her work, my aim is to spell out the different types of expertise required when instruments and computers are employed in knowledge production, and to identify potential sources of opacity. My analysis draws on a contemporary case of scientific knowledge creation, namely the processing of astrophysical data.
- JEBEILE, Julie. Explaining with simulations. Why visual representations matter, Perspectives on Science, 2018, 26:2 (March-April)
Computer simulations are often expected to provide explanations about target phenomena. However, there is a gap between the simulation outputs and the underlying model, which prevents users from finding the relevant explanatory components within the model. I contend that visual representations which adequately display the simulation outputs can nevertheless be used to obtain explanations. To this end, I elaborate on the way graphs and pictures can help one explain the behavior of a flow past a cylinder. I then specify the reasons that, more generally, make visual representations particularly suitable for explanatory tasks in a computer-assisted context.
- ARDOUREL, Vincent and JEBEILE, Julie. On the presumed superiority of analytical solutions over numerical methods, European Journal for the Philosophy of Science, 2017, volume 7, pp. 201-220 (preprint)
An important task in the mathematical sciences is to make quantitative predictions, which is often done by solving differential equations. In this paper, we investigate why, to perform this task, scientists sometimes choose to use numerical methods instead of analytical solutions. Via several examples, we argue that the choice of numerical methods can be explained by the fact that, while making quantitative predictions seems at first glance to be easier with analytical solutions, it is actually often much easier with numerical methods. We thus challenge the widely presumed superiority of analytical solutions over numerical methods.
- JEBEILE, Julie. Les simulations sont-elles des expériences numériques ? [Are simulations numerical experiments?], Dialogue: Canadian Philosophical Review/Revue canadienne de philosophie, volume 55, issue 1, 2016, pp. 59-86
Some philosophers have argued that an analogy holds between simulations and experiments. But once a few similarities between them have been acknowledged, can we really conclude that, in virtue of these similarities, simulations produce new empirical knowledge just as experiments do? I contend that these similarities at most give the user of a simulation the illusion of dealing with an experiment, but cannot seriously ground an analogy between simulation and experiment. However, one should not conclude that experiment is epistemologically superior to simulation. I analyze the cases in which simulation and experiment equally generate new knowledge.
- JEBEILE, Julie and BARBEROUSSE, Anouk. Empirical agreement in model validation, Studies in History and Philosophy of Science Part A, volume 56, April 2016, pp. 168-174 (preprint)
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be adjusted to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation.
- JEBEILE, Julie and KENNEDY, Ashley. Explaining with models: the role of idealizations, International Studies in the Philosophy of Science, 2016, volume 29, issue 4, pp. 383-392 (preprint)
Because they contain idealizations, scientific models are often considered to be misrepresentations of their target systems. An important question is therefore how models can explain the behaviors of these systems. Most of the answers to this question are representationalist in nature. Proponents of this view are generally committed to the claim that models are explanatory if they represent their target systems to some degree of accuracy; in other words, they try to determine the conditions under which idealizations can be made without jeopardizing the representational function of models. In this paper we first outline several forms of this representationalist view. We then argue that this view, in each of these forms, omits an important role of idealizations: that of facilitating the identification of the explanatory components within a model. Via examination of a case study from contemporary astrophysics, we show that one way in which idealizations can do this is by creating a comparison case which serves to highlight the relevant features of the target system.
- BARBEROUSSE, Anouk and JEBEILE, Julie. How do the validations of simulations and experiments compare?, in Beisbart, C. and Saam, N. J. (eds.) Computer Simulation Validation – Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives, Springer series: Simulation Foundations, Methods and Applications, 2018
- JEBEILE, Julie. Idealizations in empirical modeling, in Lenhard, J. and Carrier, M. (eds.) Mathematics as a Tool, Boston Studies in the Philosophy of Science, 2017, pp. 212-232 (preprint, abstract)
In empirical modeling, mathematics plays an important role in transforming descriptive representations of target system(s) into calculations, thus creating useful models. This transformation may be regarded as the action of tools. In this paper, I assume that model idealizations could be such tools. I then examine whether these idealizations have the properties usually expected of tools, i.e. being adapted to the objects to which they apply, and being to some extent generic.
- JEBEILE, Julie. Centrale nucléaire : notre nouvelle Tour de Babel ? [Nuclear power plant: our new Tower of Babel?], in Guay, A. and Ruphy, S. (eds.) Science, philosophie, société, 4th congress of the SPS, Presses universitaires de France-Comté, collection Sciences : concepts et problèmes, 2017, pp. 143-158
- JEBEILE, Julie. Nuclear Power Plant: our New Tower of Babel?, in C. Luetge and J. Jauernig (eds.), Business Ethics and Risk Management, Ethical Economy, volume 43, Springer Science + Business Media Dordrecht, 2014, pp. 129-143 (preprint)
On July 5, 2012, the Investigation Committee on the Accident at the Fukushima Nuclear Power Stations of the Tokyo Electric Power Company (TEPCO) issued a final, damning report. Its conclusions show that the human group – constituted by the employees of TEPCO and the regulatory body – had partial and imperfect epistemic control over the nuclear power plant and its environment. They also testify to a group inertia in decision-making and action. Could it have been otherwise? Isn't a collective of human beings, even optimally prepared against nuclear risk, de facto prone to epistemic imperfection and a kind of inertia? In this article, I focus on the group of engineers who, in research and design offices, design nuclear power plants and model possible nuclear accidents in order to calculate the probability of their occurrence, predict their consequences, and determine the appropriate countermeasures against them. I argue that this group is prone to epistemic imperfection, even when it is highly prepared for adverse nuclear events.
- JEBEILE, Julie. Le tournant computationnel dans les sciences : la fin d’une philosophie de la connaissance [The computational turn in the sciences: the end of a philosophy of knowledge], in M. Silberstein and F. Varenne (eds.) Modéliser & simuler. Epistémologies et pratiques de la modélisation et de la simulation, volume 1, Editions Matériologiques, 2013, pp. 171-189
I defend the thesis that the means employed to justify analytical models turn out to be ineffective in the case of numerical simulation models. To this end, I first survey the procedures of the so-called "traditional" justification of analytical models. I then show, in turn, that none of these procedures genuinely applies to simulation models.
- IPCC Assessment Reports as an epistemological puzzle, with Daniel ANDLER, Anouk BARBEROUSSE and Isabelle DROUET (abstract)
The aim of this paper is to examine the conditions under which the standards of consistency, intelligibility, comprehensiveness, and timeliness are met in IPCC ARs, despite seemingly insuperable difficulties. We shall thus focus on the specifically epistemic dimension of the collective work carried out by IPCC experts in order to provide policy makers with tools for decision support. Our goal is to explain how IPCC members manage to write expert reports endowed with the required epistemic properties. In order to do so, we shall examine the available accounts of epistemic groups and assess whether they can help us produce the explanation we seek. Our main thesis will be that the conditions under which epistemic standards are met by the collective of experts are not well described by available theories of belief aggregation or of epistemic groups. On the contrary, they are to be found in specific procedures that are determined by the topic, i.e. climate change and its associated emergency, and invented step by step by the scientific communities at hand.
- Computer simulation, experiment and novelty
It is often said that computer simulations generate new knowledge about the empirical world in the same way experiments do. My aim is to make sense of such a claim. I first show that the similarities between computer simulations and experiments do not allow them to generate new knowledge, but do at least contribute to framing a similar context of discovery in both cases. I contend that, nevertheless, computer simulations and experiments yield new knowledge under the same epistemic conditions, independently of any features they may share.
- Verification & Validation of simulations against holism, with Vincent ARDOUREL
We discuss a specific form of refutation and confirmation holism that occurs for computer simulations: the merely computational aspects of a simulation model may interfere with its representational assumptions. For that purpose, we focus on the Verification & Validation (V&V) methodology that aims at preventing such holism. Yet it has been argued that V&V is doomed to lead to holism, since the verification and validation stages are entangled. Against this, we argue that holism can in practice be overcome gradually, depending on the requirements of the scientists. We show that scientists endeavour to provide a priori mathematical justifications for verification, and that formal methods are currently being developed which reach the highest levels of requirement.
- Learning from a toy model: the Kac ring (abstract)
Scientific models misrepresent their target in that some features of the target are omitted and some others are idealized. An important question is therefore how scientists can genuinely learn something from models. Answers to this question have centered on an analysis of scientific models as approximate descriptions of their target. However such an analysis falls short in accounting for how toy models can teach us things about actual empirical systems. Toy models are works of fiction par excellence in that their primary aim is not to accurately describe a particular system but rather, based on strong distortions, to help users to infer general features of a certain class of systems. In this paper, I contend that toy models are better analyzed in terms of a specific kind of fiction, i.e. scientific caricatures, than in terms of approximate descriptions. In arguing for such an account of toy models, I develop a case study. I elaborate on the way the Kac ring model is used to study an attempt at explaining the second law of thermodynamics.
- Ensemble of climate models or missed opportunity?, with Michel CRUCIFIX
According to a common claim, the multi-model ensemble in the Coupled Model Intercomparison Project is not designed to properly span uncertainty ranges because it relies on self-selection by the modeling groups. We offer an argument for this: in building their models, climate scientists make choices of representation that can be driven by contextual values as well as by collective and personal interests. We then qualify the claim by arguing that even the possibility of coordinating worldwide model development, so as to avoid values and interests, is no guarantee that the ensemble will be well designed for quantifying uncertainties.
- Risky technologies, democratic decisions and sensitive information: how to deal with secrecy?, with Cyrille IMBERT
In this paper, we discuss at greater length the dilemma of secrecy in the case of risky technologies. A specificity here is that openness — i.e. access to relevant information — is crucial for making democratic choices about public policies regarding technological activities involving potential risks. However, openness can come with potentially large drawbacks. We argue that making secrecy temporary and delaying accountability is often inappropriate and unlikely to have virtuous effects on decision-makers. We then consider solutions in terms of unequal access to information and discuss how far disclosing sensitive information and delegating power to limited groups of experts, elected officials and/or citizens can contribute to solving the problem. We emphasize in particular that involving citizens in decision-making about risky technologies is a promising option for reconciling secrecy with the need for well-informed decisions arising from deliberations that are sensitive to the preferences of the people.
- Weak Emergence in Nature, with Anouk BARBEROUSSE
There is a long tradition of attempts at defining emergence. One of the most recent items in this series is Mark Bedau’s definition, according to which macrostate P of a system S with microdynamic D is weakly emergent if and only if P can be derived from D and S’s external conditions, but only by simulation. His proposal does not only aim to account for computationally emergent properties, that is, properties of artefacts implemented on computers. It is also a claim about the existence of emergent phenomena in nature. In the paper, we first examine whether his definition can really account for emergent properties in computational artefacts, as the derivability requirement is doubly questionable and seems to make the definition irrelevant even within the computational domain. We then discuss the scientific and metaphysical implications of Bedau’s claim about the existence of weakly emergent properties in nature.