I wrote an easy introduction to these sorts of issues for the Institute of Arts and Ideas.
More specifically, I argue that once one gives up the idea that models are accurate representations of their targets only if they are appropriately similar, then simple and highly idealized models can be accurate in the same way that more complex models can be.
Research Papers, Roman Frigg
Their differences turn on trading precision for generality, but, if they are appropriately interpreted, toy models should nevertheless be considered accurate representations.

Veritism, the position that truth, or accuracy, is necessary for epistemic acceptability, seems to be in tension with the observation that much of our best science is not, strictly speaking, true when interpreted literally.
This generates a paradox: (i) truth is necessary for epistemic acceptability; (ii) the claims of science have to be taken literally; (iii) much of what science produces is not literally true and yet it is acceptable. We discuss the paradox with a particular focus on scientific models and argue that there is another resolution available which is compatible with retaining veritism: rejecting the idea that scientific models should be interpreted literally.

How does mathematics apply to something non-mathematical? In this paper, we employ category-theoretic tools to illuminate an aspect of this puzzle.
We distinguish between a general application problem and a special application problem. A critical examination of the answer that structural mapping accounts offer to the former problem leads us to identify a lacuna in these accounts: they have to presuppose that target systems are structured and yet leave this presupposition unexplained. We propose to fill this gap with an account that attributes structures to targets through structure generating descriptions.
These descriptions are physical descriptions and so there is no such thing as a solely mathematical account of a target system.
In this paper I investigate the properties of social welfare functions defined on domains where the preferences of one agent remain fixed. Such domains are a degenerate case of those investigated, and proved Arrow consistent, by Sakai and Shimoji. Thus they admit functions from them to a social preference that satisfy Arrow's conditions of Weak Pareto, Independence of Irrelevant Alternatives, and Non-Dictatorship. However, I prove that according to any function that satisfies these conditions on such a domain, for any triple of alternatives, if the agent with the fixed preferences does not determine the social preference on any pair of them, then some other agent determines the social preference on the entire triple.
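The flavour of such a fixed-preference domain can be sketched concretely. The snippet below is an illustrative construction of my own devising, not Sakai and Shimoji's: agent 1's preference over four hypothetical alternatives is held fixed, and a two-tier rule lets that fixed ranking settle the top tier while a second agent settles the bottom tier. A brute-force check over all of agent 2's strict orders confirms that, on this restricted domain, the rule satisfies Weak Pareto, Independence of Irrelevant Alternatives, and Non-Dictatorship.

```python
from itertools import combinations, permutations

ALTS = ['a', 'b', 'c', 'd']
FIXED = ('a', 'b', 'c', 'd')     # agent 1's preference, best first, held fixed
TOP, BOTTOM = {'a', 'b'}, {'c', 'd'}

def prefers(order, x, y):
    """True if x is ranked strictly above y in `order` (best first)."""
    return order.index(x) < order.index(y)

def social(p2):
    """Hypothetical two-tier rule: the top tier {a, b} is always ranked above
    the bottom tier {c, d}; within the top tier society follows agent 1's
    fixed order, within the bottom tier it follows agent 2."""
    top = [x for x in FIXED if x in TOP]
    bottom = [x for x in p2 if x in BOTTOM]
    return tuple(top + bottom)

# Agent 1 is fixed, so a profile is just agent 2's strict order.
profiles = list(permutations(ALTS))

# Weak Pareto: if both agents rank x above y, so does society.
for p2 in profiles:
    s = social(p2)
    for x, y in permutations(ALTS, 2):
        if prefers(FIXED, x, y) and prefers(p2, x, y):
            assert prefers(s, x, y)

# IIA: society's ranking of a pair depends only on how the profile ranks it.
for p2 in profiles:
    for q2 in profiles:
        for x, y in combinations(ALTS, 2):
            if prefers(p2, x, y) == prefers(q2, x, y):
                assert prefers(social(p2), x, y) == prefers(social(q2), x, y)

# Non-Dictatorship: neither agent's ranking always coincides with society's.
assert any(social(p2) != FIXED for p2 in profiles)
assert any(social(p2) != p2 for p2 in profiles)
print("Weak Pareto, IIA and Non-Dictatorship all hold on this fixed domain")
```

Note that the rule is acceptable only because the domain is restricted: on an unrestricted domain, a profile in which both agents rank c above a would make the constant ranking of the top tier above the bottom tier violate Weak Pareto, but with agent 1's preference fixed such unanimity never arises.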
Kuhn argued that scientific theory choice is, in some sense, a rational matter, but one that is not fully determined by shared objective scientific virtues like accuracy, simplicity, and scope. Okasha imports Arrow's impossibility theorem into the context of theory choice to show that rather than not fully determining theory choice, these virtues cannot determine it at all.
This threatens the rationality of science. In this paper we show that if Kuhn's claims about the role that subjective elements play in theory choice are taken seriously, then the threat dissolves.

Many scientific models are representations. Building on Goodman and Elgin's notion of representation-as, we analyse what this claim involves by providing a general definition of what makes something a scientific model and by formulating a novel account of how models represent. We call the result the DEKI account of representation, which offers a complex kind of representation involving an interplay of denotation, exemplification, keying up of properties, and imputation.
Throughout we focus on material models, and we illustrate our claims with the Phillips-Newlyn machine. In the conclusion we suggest that, mutatis mutandis, the DEKI account can be carried over to other kinds of models, notably fictional and mathematical models.

In this paper I fruitfully connect two debates in the philosophy of science: the question of scientific representation and the question of model (and theoretical) equivalence. I argue that by paying attention to how a model is used to draw inferences about its target system, we can define a notion of theoretical equivalence that turns on whether models licence the same inferences about the same target systems.
I briefly consider the implications this has with respect to two questions that have recently been discussed in the context of the formal philosophy of science.

I investigate van Fraassen's claim that, for a given scientist, in a given context, there is no pragmatic difference between taking a model to accurately represent a target system (a physical system out there in the world) and taking it to accurately represent a data model (a mathematical object extracted from that system).
I reconstruct van Fraassen's argument for this claim before demonstrating that it turns on the false premise that an act of representing that P commits the representer to the belief that P. So van Fraassen's claim that denying that models represent target systems would result in an instance of Moore's paradox fails. Unlike assertion, acts of representation fail to generate any doxastic commitments. This paper won me the Popper Prize for distinguished work by a graduate student in the philosophy department at the LSE.
In this paper we explore the constraints that our preferred account of scientific representation places on the ontology of scientific models.

Yet these philosophical advances in understanding science remain isolated from the philosophical mainstream in metaphysics, epistemology, and the philosophy of language and mind. Science sets the horizons for philosophical inquiry, but recent philosophy of science has played a minimal role in shaping the conception of science that other philosophers invoke. My question about the philosophical significance of the philosophy of scientific practice gains urgency in this context.
Will the philosophy of scientific practice merely become a narrower specialist niche within an already isolated sub-discipline of philosophy? Or does attention to scientific practice promise to restore the philosophy of science to a more central place in philosophy? I believe that the philosophy of scientific practice can indeed make important contributions to philosophy more generally, in part by challenging naive conceptions of science often taken for granted elsewhere.
Today I shall only consider a single prominent issue. Logical empiricist and early post-empiricist philosophy of science notoriously struggled with this issue. McDowell argued that contemporary philosophy has failed to negotiate safe passage between two dangerous attractors: the Myth of the Given and a frictionless coherentism. Davidson, Rorty, and other pragmatists circumvent the Myth of the Given, but only by treating conceptual spontaneity as merely internally coherent. Before we can ask about the empirical justification of a claim, we must understand what it claims.
Most philosophers distinguish these issues of conceptual articulation and empirical justification by a division of labor. They treat conceptual articulation as an entirely linguistic or mathematical activity of developing and regulating inferential relations among sentences or equations.
Spotlighting new work in the philosophy of science
Experimentation and observation then address only the justification of the resulting claims. If experience conflicts with our theoretical predictions, we must go back, make internal adjustments to our theories, and try again. Second, I shall argue, this transformation shows that conceptual articulation is not merely a matter of spontaneous thought in language or mathematics, and thus not merely intralinguistic; instead, experimental practice itself can contribute to the articulation of conceptual understanding.
Experimental work does not simply strip away confounding complexities to reveal underlying nomic simplicity; it creates new complex arrangements as indispensable background to any foregrounded simplicity. Created phenomena may help us discern relevant laws or construct illuminating theories, but they can only indicate possible directions for analysis. It is not enough to acknowledge that experimentation also has its own ends.
Experimental practice can be integral rather than merely instrumental to achieving conceptual understanding. Today, I consider a special class of experimental systems. These are not verbal, mathematical or pictorial representations of some actual or possible situation in the world. They are not even physical models, like the machine-shop assemblies that Watson and Crick manipulated to discover the three-dimensional structure of DNA. They are instead novel, reproducible arrangements of some aspect of the world. Heidegger, whose writings about science emphasize the practice of scientific research, forcefully characterized the role I am attributing to these systems.
What does it mean to open up a scientific domain, and how are such openings related to the construction of experimental systems? Popular presentations of scientific progress often emphasize the replacement of error and superstition by scientific knowledge.
Yet in many areas of scientific work, the very phenomena at issue were previously inaccessible. Earlier generations could not be in error about these matters, because they could have little or nothing to say about them.
Prior to the development of those experimental practices, these corresponding aspects of the natural world lacked the manifest differences needed to sustain conceptual development. What changed the situation was not just new kinds of data, or newly imagined ways of thinking about things, but new interactions that articulate the world itself differently.
It is no accident that biologists speak of the key components of their experimental systems as model organisms, and that scientists more generally speak of experimental models. The cross-breeding of mutant strains of Drosophila with stock breeding populations, for example, was neither interesting for its own sake, nor merely a peculiarity of one species of Drosophila. The Drosophila system was instead understood, rightly, to show something of fundamental importance about genetics more generally; indeed, I shall argue, it constituted genetics as a distinct research field.
As created artifacts, laboratory phenomena and experimental systems have a distinctive aim. Most artifacts, including the apparatus within an experimental system, are used to accomplish some end. The end of an experimental system itself, however, is not what it does, but what it shows. Experimental systems are novel re-arrangements of the world that allow some features that are not ordinarily manifest and intelligible to show themselves clearly and evidently.
Sometimes such arrangements isolate and shield relevant interactions from confounding influences. Sometimes they introduce signs or markers into the experimental field, such as radioisotopes, genes for antibiotic resistance, or correlated detectors. Understanding this aspect of experimentation requires that we reverse the emphasis from traditional empiricism: what matters is not what the experimenter observes, but what the phenomenon shows.
Catherine Elgin develops this point by distinguishing the features or properties an experiment exemplifies from those that it merely instantiates. In her example, rotating a flashlight 90 degrees merely instantiates the constant velocity of light in different inertial reference frames. Light within the Michelson interferometer, by contrast, really does travel at constant velocities in orthogonal directions, and the experiment exemplifies that constancy. Elgin thereby emphasizes the symbolic function of experimental performances, and suggests parallels between their cognitive significance and that of paintings, novels, and other artworks.
Unexemplified and therefore unconceptualized features of the world would then be like the statue of Hermes that Aristotle thought exists potentially within a block of wood. In retrospect, with a concept clearly in our grasp (or better, with us already in the grip of that concept), the presumption that it applies to already-extant features of the world is unassailable. Of course there were mitochondria, spiral galaxies, polypeptide chains and tectonic plates before anyone discerned them, or even conceived their possibility. Yet this retrospective standpoint, in which the concepts are already articulated and the only question is where they apply, crucially mislocates important aspects of scientific research.
In Kantian terms, researchers initially seek reflective rather than determinative judgments. Scientific research must articulate concepts with which the world can be perspicuously described and understood, rather than simply apply those already available. To be sure, conceptual articulation does not begin de novo. Yet in science, one typically recognizes such prior articulation as tentative and open-textured, at least in those respects that the research aims to explore. When a domain of research has not yet been conceptually articulated, the systematic character of experimental operations becomes especially important.
Domain-constitutive systems must have sufficient self-enclosure and internal complexity to allow relevant features to stand out through their mutual interrelations. That scientific experimentation typically needs an interconnected experimental system is now widely recognized in the literature. Yet the importance of experimental systematicity is still commonly linked to questions of justification. The self-vindicating stability of the laboratory sciences, Ian Hacking argued, is achieved in part by the mutually self-referential adjustment of theories and data. As a different example, Ursula Klein showed that carbon chemistry likewise became a domain of inquiry, distinct from the merely descriptive study of various organically-derived materials, through the systematic, conceptually articulated tracking of ethers and other derivatives of alcohol (Klein). Leyden jars and voltaic cells played similar roles for electricity.
What is needed to open a novel research domain is typically the display of an intraconnected field of reliable differential effects: not merely creating phenomena, but creating an experimental practice. This constitution of a scientific domain accounts for the conceptual character of the distinctions that function within the associated scientific field.
We need to be careful here, for we cannot presume the identity and integrity of genetics as a domain. Yet the conception of genes as the principal objects of study within the domain of genetics marks something novel. Prior conceptions of heredity did not and could not distinguish genes from the larger processes of organismic development in which they functioned. What the Drosophila system initially displayed, then, was a field of distinctively genetic phenomena.
The differential development of organisms became part of the experimental apparatus that articulated genes, by connecting relative chromosomal locations, characteristic patterns of meiotic crossover, and phenotypic outcomes. What the Drosophila system thus did was to allow a much more extensive inferential articulation of the concept of a gene.
Concepts are marked by their availability for use in contentful judgments, whose content is expressed inferentially. Consider the judgment in classical Drosophila genetics that the Sepia gene is not on chromosome 4. Such judgments indicate a more-or-less definite space of alternatives.