BSPS 2013 Accepted Papers with Abstracts - College of Social ...


Charlotte Werndl. Justifying Typicality Measures of Boltzmannian Statistical Mechanics and Dynamical Systems

Abstract: An important question in the foundations of Boltzmannian statistical mechanics is how to interpret the measure defined over the possible states of a gas. A recent popular proposal is to interpret it as a typicality measure: it represents the relative size of sets of states, and typical states show a certain property if the measure of the set that corresponds to this property is close to one. That is, a typicality measure counts states and does not represent the probability of finding a system in a certain state. However, a justification is missing for why the standard measure in statistical mechanics is the correct typicality measure. For dynamical systems, too, an interpretation needs to be found for the measures used, and one suggestion is to interpret them as typicality measures. Here again the question arises of how to justify particular choices of measures, and this question has hardly been addressed. This paper attempts to fill this gap.

First, the paper criticises Pitowsky (2012) – the only justification of typicality measures known to the author. Pitowsky argues as follows. Consider the set S of all infinite sequences of zeros and ones. By approximation with the measures defined on the sets of finite sequences of zeros and ones, a unique measure m can be defined on S. Let f be the map which assigns to each infinite sequence s of zeros and ones the number in the unit interval whose binary development is s. When f is used to map the measure m on S to a measure on the unit interval, one obtains the uniform measure. Hence the uniform measure is the correct typicality measure. This paper argues that Pitowsky's argument is untenable. It is unclear why f and not another function is used to map the measure m to the unit interval.
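Pitowsky's construction can be illustrated numerically: pushing the fair-coin measure m forward through the binary-development map f yields (approximately, for finite truncations) the uniform measure on the unit interval. A minimal sketch in Python; the truncation length and sample size are illustrative choices, not part of Pitowsky's argument:

```python
import random

def f(bits):
    """Binary-development map: a 0/1 sequence s to the number 0.s in [0, 1]."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

random.seed(0)
# Approximate the measure m on infinite sequences by 53-bit truncations.
samples = [f([random.randint(0, 1) for _ in range(53)]) for _ in range(100_000)]

# Empirical mass of each quarter of [0, 1]; the uniform measure assigns 0.25 to each.
quarters = [sum(q / 4 <= x < (q + 1) / 4 for x in samples) / len(samples)
            for q in range(4)]
print(quarters)
```

Werndl's criticism survives the illustration: a different choice of map in place of f would push m forward to a non-uniform measure, and nothing in the construction itself privileges f.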
Furthermore, there are counterexamples: for many systems on the unit interval the standard measure is not the uniform measure.

Then a new justification of typicality measures is advanced. It is natural to require that typicality measures should be invariant. Furthermore, assume that gases are epsilon-ergodic. A major argument of the paper is that a theorem by Vranas (1998) intended for a different purpose can be used to justify typicality measures. This theorem says that, for epsilon-ergodic systems, any measure which is invariant and translation-close approximately equals the standard measure (translation-closeness is the condition that the measure of a slightly displaced set only changes slightly). Hence, if translation-closeness can be justified, the standard measure can be regarded as the correct typicality measure. The crucial remaining question is how to justify translation-closeness of typicality measures. This paper argues that a justification based on Vranas (1998) fails because there are irresolvable technical problems. Then a new justification is proposed: because slightly displaced sets cannot be distinguished by measurements, it is reasonable to require that the typicality measure of a slightly displaced set only changes slightly. Consequently, the standard measures of statistical mechanics and dynamical systems can be justified as typicality measures.

Mirko Farina. Re-Thinking Neuroconstructivism through Dynamic (Neuro)-Enskilment

Abstract: In this paper I discuss two views - standard neuroconstructivism, and dynamic neuro-enskilment - that explain human cognitive and cortical development from different standpoints. I then compare these views and critically analyse the links between them. I do so to demonstrate that standard neuroconstructivism, in order to

fully account for recent empirical findings, needs to be updated and radicalized along the lines envisaged by the dynamic neuro-enskilment view.

Standard neuroconstructivism (Mareschal et al. [2007]; Karmiloff-Smith [2009]; Westermann et al. [2010]) characterizes development as a trajectory that is shaped by multiple interacting biological and environmental constraints, in which complex representations develop based on earlier and simpler ones. This increase in representational complexity is realized through a progressive elaboration of functional cortical structures, which are not selected from a constrained juvenile stock but rather emerge in an experience-dependent way. Standard neuroconstructivism therefore argues for progressive elaboration of neural structures, with earlier structures forming the building blocks for later structures, and describes development within a perspective of context-dependent learning. Thus, standard neuroconstructivism calls for consistency between the neural and cognitive levels in characterizing developmental trajectories, posits the interrelatedness (on multiple timescales) of brain, body, and world, and argues that the interweaving of all these factors is crucial for cognitive development. Congruent with standard neuroconstructivism, I argue that although there might be some prewired, softly specialized circuits in place in the brain prior to birth, their organization as well as their cognitive functions can be altered and rewired through specific environmental exposure.
Unlike standard neuroconstructivism, however, I claim that the constraints imposed on development by experience-dependent activities in early stages of life can themselves be rewired throughout the lifespan, due to a second period of synaptic plasticity found in adolescence; and I stress the pivotal role that evolving socio-cultural environs play in influencing and re-directing the developmental path during adulthood.

I thus defend a fully developmental account of our cognitive capacities, a view which I label ‘dynamic neuro-enskilment’, which theorizes a profound dependence of brain organization and cortical development on both patterned practices and cultural/social activities. This understanding in particular integrates ideas about distributed enculturated cognition/cognitive ecologies (Roepstorff and Niewoehner [2010]; Hutchins [2010]) with new work on neural plasticity in cultural neuroscience (Chiao [2009]; Kitayama and Park [2010]). The dynamic neuro-enskilment view thus emphasizes the power of rewiring throughout the entire lifespan, stresses the role of culture and socio-cultural/technological environs in moulding the functioning and processing of our brains, and assumes that regular, patterned activities shape the human mind through embodiment and internalization. In line with this view, I hypothesize that (adult) entrenchment in different socio-cultural contexts can generate completely dissimilar neural responses, leading to structurally different, cognitively diverse, and deeply enculturated brains.

To corroborate this hypothesis I appeal to recent empirical findings in cultural neuroscience. These findings reveal the role that expertise and experience-based neuronal plasticity play in redirecting the developmental path in adulthood.
From the analysis of these results, I extract the conclusion that the original (standard) neuroconstructivist framework needs to be extended and radicalized along the lines envisaged by the dynamic neuro-enskilment view, and argue that the pay-off of this radicalization is highly desirable.

REFERENCES

Chiao, J. Y. (ed.) [2009]: Cultural Neuroscience: Cultural Influences on Brain Function, Progress in Brain Research, Volume 178, New York: Elsevier.
Hutchins, E. [2010]: ‘Cognitive Ecology’, Topics in Cognitive Science, 2, pp. 705-15.
Karmiloff-Smith, A. [2009]: ‘Nativism versus Neuroconstructivism: Rethinking the Study of Developmental Disorders’, Developmental Psychology, 45, pp. 56-63.
Kitayama, S., and Park, J. [2010]: ‘Cultural neuroscience of the self: Understanding the social grounding of the brain’, SCAN, 5, pp. 119-29.

Mareschal, D., Johnson, M. H., Sirois, S., Spratling, M. W., Thomas, M. S. C., and Westermann, G. [2007]: Neuroconstructivism: How the Brain Constructs Cognition, Oxford, UK: Oxford University Press.
Roepstorff, A., Niewoehner, J., and Beck, S. [2010]: ‘Enculturing brains through patterned practices’, Neural Networks, 23, pp. 1051-59.
Westermann, G., Thomas, M. S. C., and Karmiloff-Smith, A. [2010]: ‘Neuroconstructivism’, in U. Goswami (ed.), The Wiley-Blackwell Handbook of Childhood Cognitive Development, Oxford, UK: Wiley-Blackwell, pp. 723-48.

Karim Thebault and Sean Gryb. Time Remains

Abstract: Even classically, it is not entirely clear how one should understand the implications of general covariance for the role of time in physical theory. On one popular view, the essential lesson is that change is relational in a strong sense, such that all that it is for a physical degree of freedom to change is for it to vary with regard to a second physical degree of freedom. This implies that there is no unique parameterization of time slices, and also that there is no unique temporal ordering of states. Furthermore, it implies a fundamentally different view of what a degree of freedom actually is -- since such parameters can no longer be understood as being free to change and be measured independently of any other degrees of freedom.
On this first view, it should be no great surprise that when the equations of a classically generally covariant or reparameterization-invariant theory are canonically quantized one arrives at a timeless formalism -- since in essence this facet is already implicit within the classical theory. Both classically and quantum mechanically, the canonical observables that faithfully parametrize the true degrees of freedom of the theory in question are those which commute with the relevant Hamiltonian constraints and, both classically and quantum mechanically, these perennials cannot by definition vary along a dynamical trajectory.

On a second view, such a radical variant of relationalism with regard to change and time is taken to go a little too far. The lessons for time drawn from general covariance are more subtle, and imply that while duration is relative, both the change in a given degree of freedom, and the ordering of such change along a dynamical history, can be understood as absolute -- it is only the labelling of change that is arbitrary, not the change itself. Such an interpretation of general relativity is consistent with general covariance because it can be maintained via the addition of only a single arbitrary time parameter, corresponding to the minimal temporal structure necessary for a succession of observations to be represented. And it is consistent with the formalism of the theory since ordering structure is explicitly encoded within the positivity of the lapse multiplier that appears within the canonical action.

This ‘Machian’ approach to the classical theory of gravity can be used to motivate a ‘relational quantization’ methodology, such that it is possible to conceive of dynamical observables within a theory of quantum gravity.
Previous work has shown that for simple reparameterization-invariant models, a specific, trivial expansion of the original phase space in terms of a variable (and its canonical conjugate) that acts as an arbitrary clock variable can allow us to retain change and temporal ordering at a quantum level. Here we will offer both conceptual motivation and formal arguments towards the viability of applying relational quantization to the full theory of relativity, and in doing so demonstrate connections between unimodular gravity and the 3D conformally invariant ‘shape dynamics’ reformulation of general relativity.

Jonathan Bain. Pragmatists and Purists on CPT Invariance in Relativistic Quantum Field Theory

Abstract: Pragmatist approaches to relativistic quantum field theories (RQFTs) trade mathematical rigor for the ability to formulate non-trivial interacting models (examples include the textbook Lagrangian approach, and Weinberg's approach). Purist approaches

to RQFTs trade the ability to formulate non-trivial interacting models for mathematical rigor (examples include the axiomatic and algebraic formalisms). Philosophers of physics are split on whether foundational issues related to RQFTs should be framed within pragmatist or purist approaches. This essay addresses this debate by viewing it through the lens of a specific result that many authors have claimed is unique to RQFTs; namely, the CPT theorem. I first consider Greenberg's (2002) claim that, within the purist axiomatic approach, a violation of CPT invariance entails a violation of restricted Lorentz invariance. I then review a critique of Greenberg within the context of "causal perturbation theory", which seeks to establish a mathematically rigorous foundation for the perturbative techniques that underlie pragmatist approaches (Dütsch & Gracia-Bondía 2012). I then assess the extent to which causal perturbation theory can be viewed as an attempt to reconcile pragmatism and purity.

In the axiomatic approach, the proof of CPT invariance is restricted to Wightman functions (vacuum expectation values of unordered products of fields). This is a glaring restriction, since there are no non-trivial interacting models of the Wightman axioms. Greenberg (2002) considers a time-ordered Wightman function (or "τ-function") τ defined by

τ(x1, ..., xn) ≡ ∑p θ(tp1, ..., tpn) W(xp1, ..., xpn)   (1)

where W(x1, ..., xn) ≡ 〈Ω|φ(x1)...φ(xn)|Ω〉 for vacuum state |Ω〉 and field φ(x), and the Heaviside function θ enforces time ordering. In pragmatist approaches to RQFTs, interactions are treated by calculating the S-matrix in terms of a perturbative expansion involving such τ-functions. Greenberg demonstrates that if W violates CPT invariance, then τ violates restricted Lorentz invariance.
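For clarity, equation (1) can be written out explicitly: the sum runs over permutations p of {1, ..., n}, and the time-ordering factor θ(tp1, ..., tpn) abbreviates the usual product of Heaviside step functions (this expanded form is the standard convention, not notation taken from Greenberg's paper):

```latex
\tau(x_1,\dots,x_n) \;\equiv\; \sum_{p}
  \theta(t_{p_1}-t_{p_2})\,\theta(t_{p_2}-t_{p_3})\cdots\theta(t_{p_{n-1}}-t_{p_n})\,
  W(x_{p_1},\dots,x_{p_n}),
\qquad
W(x_1,\dots,x_n) \;\equiv\; \langle \Omega \,|\, \phi(x_1)\cdots\phi(x_n) \,|\, \Omega \rangle .
```

Claim (b) below turns on exactly this structure: each θ-factor multiplies the distribution W pointwise, and such products of distributions need not exist.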
He concludes that if Lorentz invariance of an interacting RQFT requires restricted Lorentz invariance of τ-functions, then "...if CPT invariance is violated in an interacting quantum field theory, then that theory also violates Lorentz invariance" (2002, p. 231602-1). This demonstration has been very influential in the physics literature, since it suggests a test for violations of Lorentz invariance via experiments that measure CPT violation. However, it fails for two reasons:

(a) Since non-trivial interacting Wightman functions do not exist, neither do non-trivial interacting τ-functions.
(b) The product of a Heaviside function and a Wightman function does not, in general, exist; thus τ-functions, regardless of whether they are free or interacting, do not, in general, exist.

Claim (b) follows from the treatment of Wightman functions as distributions, combined with the fact that pointwise multiplication of distributions is not in general well-defined. Dütsch & Gracia-Bondía (2012) address (b) by adopting the regularization scheme associated with causal perturbation theory. This scheme allows one to rigorously define extensions of distributions to underwrite products like (1). Causal perturbation theory, however, is more than just a regularization scheme: it is an approach to RQFTs in which the basic objects are time-ordered products of fields (TOP) (e.g., Gracia-Bondía 2006). The TOP are required to satisfy a set of axioms, and are then used to construct a formal series expansion of the S-matrix from which interacting fields are generated. Such interacting fields are mathematically well-behaved, provided the TOP are suitably renormalized, and physically meaningful, provided an adiabatic limit exists.

The primary goal of the current essay is to place causal perturbation theory within the framework of pragmatist and purist approaches to RQFTs. On the surface, causal perturbation theory combines aspects of both purist and pragmatist approaches.
I will argue that, ultimately, it belongs to purist approaches; however, philosophers of physics should pay it heed for its willingness to engage with the perturbative techniques of pragmatism.

Elizabeth Irvine and Sean Roberts. Learning from computer and human simulations: The case of language evolution

Abstract: A major methodological problem in complex systems research is how to support theoretical claims that are based on clearly simplistic models and simulations, and where there is limited experimental access to the target system. Here, we explore the case study of language evolution in order to outline how the notion of robustness can provide warrant for theoretical claims based on these methods. This stems from current research in this area and from relevant areas of philosophy of science (e.g. Wimsatt 1981, Weisberg 2006).

A central question in the field of the evolution of language is whether linguistic structure is mainly a product of domain-specific genetic constraints, or of cultural transmission. However, the cultural evolution of language is difficult to study because there is little direct evidence available. Computer simulations make it possible to study the dynamics of cultural evolution, but often include highly simplified mechanisms of learning. Human simulations (replacing computer agents with human subjects) obviously use a realistic learning mechanism, but present the problem that test subjects already know natural languages. Recent work suggests that linguistic structure emerges in iterated learning contexts with selection pressures to be learnable and expressive (Kirby 1999, Hurford 2000, Smith, Kirby & Brighton 2003), but problems with simulation methods make it difficult to justify theoretical claims about the evolution of language. One crucial point here is that simulations of cultural transmission are not intended as instantiations of human language learning, nor of the evolution of a real language. Di Paolo et al. (2000) suggest that simulations should be seen as ‘opaque thought experiments’ that reveal new principles or constraints relating to a theory, where the theory here is about the general dynamics of cultural transmission.
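The iterated-learning setup can be sketched as a toy transmission chain: each generation learns a meaning-signal mapping from a bottleneck-limited sample of the previous generation's output, so only generalizable structure reliably survives transmission. Everything below (the meaning space, the learner's generalization rule, the bottleneck size) is an invented illustration, not the model of Kirby, Hurford, or Smith et al.:

```python
import random

# Meanings are (shape, colour) pairs; signals are two-character strings.
MEANINGS = [(s, c) for s in "AB" for c in "xy"]

def learn(observed):
    """A learner memorizes observed meaning-signal pairs and generalizes to
    unseen meanings by reusing signal parts seen with each meaning component."""
    part = {}
    for (shape, colour), signal in observed.items():
        part[shape], part[colour] = signal[0], signal[1]
    return {m: observed.get(m, part.get(m[0], random.choice("pq"))
                               + part.get(m[1], random.choice("rs")))
            for m in MEANINGS}

random.seed(1)
# Generation 0: an arbitrary 'holistic' language with unrelated signals.
language = {m: random.choice("pq") + random.choice("rs") for m in MEANINGS}
for _ in range(20):
    # Transmission bottleneck: each learner observes only three of the four meanings.
    exposure = dict(random.sample(sorted(language.items()), 3))
    language = learn(exposure)
print(language)
```

Because this learner generalizes from signal parts, regularities that align with meaning components tend to be reproduced even when a meaning is cut by the bottleneck; in published iterated-learning models this is the route by which compositional structure accumulates over generations.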
Evaluations of model-target ‘fit’, related to notions of robustness (Levins 1966, Wimsatt 1981, Weisberg 2006), play a significant role in linking these very general principles to real cases of language evolution.

Robust properties are those that are consistently found across a set of different models, suggesting that they are ‘important’ properties that derive not from incidental features of the models, but from a common structure found across all the models. As linguistic structure reliably emerges from a range of models (e.g. agent-based and mathematical), and from computer and human simulations, even when significant variables are changed (e.g. learning mechanisms, prior knowledge of natural language), this is a robust property of these models.

Going further than Weisberg's (2006) analysis of robustness, we suggest that the very discovery of robust properties across a range of computational models and simulations and (importantly) human simulations lends support to theoretical claims. These claims are that real cases of language evolution share the same structure found in computational models, and in human and computer simulations. Robustness analysis that spans both models and experimental work can therefore give support for theoretical claims based on individually problematic methods.

Matteo Colombo. Deep and Beautiful. The Reward Prediction Error Hypothesis of Dopamine

Abstract: Every year the web magazine Edge invites prominent scientists, philosophers, journalists, artists, historians, and so on, to answer a grand question. For 2012, the question was: "What is your favourite deep, elegant or beautiful explanation?" One of the few responses that pointed to a hypothesis from the cognitive sciences was put forward by Terrence Sejnowski, one of the world's most respected computational cognitive neuroscientists. Sejnowski's favourite deep, elegant or beautiful explanation is

(Fahrbach 2011) according to which there is a degree to which even a novel predictive success does/doesn't count as ‘impressive’.

I further consider the case of Kirchhoff's theory of diffraction. Drawing on Brooker (2008), I present a response to the claim (Saatsi and Vickers 2011) that one of the working posits in the Kirchhoff derivation is ‘radically false’. This leads to a more general lesson: the divide et impera realist is able to respond to challenges from the history of science simply by identifying *some* of the *idle* posits in the derivation of a prediction. If this is right, the realist may be able to sidestep the concern—emphasised in particular by Kyle Stanford—that we cannot prospectively identify the working posits of a theory.

REFERENCES

Brooker, G. (2008): ‘Diffraction at a single ideally conducting slit’, Journal of Modern Optics, 55(3), pp. 423-445.
Fahrbach, L. (2011): ‘Theory Change and Degrees of Success’, Philosophy of Science, 78(5), pp. 1283-1292.
Laudan, L. (1981): ‘A Confutation of Convergent Realism’, Philosophy of Science, 48(1), pp. 19-48.
Saatsi, J. and Vickers, P. (2011): ‘Miraculous Success? Inconsistency and Untruth in Kirchhoff's Diffraction Theory’, British Journal for the Philosophy of Science, 62(1), pp. 29-46.

Stefan Heidl. Abstraction and the Explanatory Autonomy of Economics

Abstract: Explanatory autonomy describes the idea that high-level sciences can provide explanations which are independent of more fundamental sciences. A science can be explanatorily autonomous if it has its own standards of explanatory relevance, which allow it to abstract from details of the world that a more fundamental science would need to consider.

Some economists (for example [Gul and Pesendorfer (2008)]) argue that economics is explanatorily autonomous from psychology. They are opposed to the sub-discipline of behavioral economics, in which results of cognitive psychology are integrated into economic theory [Camerer (1999)].
They argue against an integration of psychology into economics because there is a specific economic aspect of the world which can be analyzed while abstracting from all non-economic details. This aspect is the contribution of instrumental rationality to social behavior. Instrumental rationality is seen as an important cause of social behavior because people in all kinds of situations face problems in which they have to effect trade-offs between goals which cannot all be satisfied at once. A typical economic explanation takes preferences over the goals of agents as given, puts these agents in an economic environment that is characterized by economic variables like the number and prices of goods, and explores the effect different values of these variables have on the choices of agents. Only properties which are relevant for instrumentally rational choice are included in the models of economics. Economics also abstracts from the analysis of non-choice processes. It assumes that preference-formation will yield stable and consistent preferences and leaves the analysis of this process to psychology. Adding properties which are unrelated to instrumental rationality, or models of non-choice processes, into economic theory would mean adding irrelevant details.

I want to question whether the argument via the method of abstraction is sufficient to establish an autonomy of economics from psychology. Empirical anomalies of economic theory, which were discovered in studies by cognitive psychologists, show that the level of abstraction of economics is not appropriate for the analysis of choice behavior. An example is the phenomenon of framing, which shows that the way a decision problem is presented has an influence on the behavior of agents. [Tversky and Kahneman (1986)]

show that the presentation of a decision problem determines a reference-point according to which outcomes are coded as either losses or gains, and that people treat losses differently from gains. Jointly this implies that people can choose differently in mathematically equivalent problems because of the way in which these problems are described. Framing causes an anomaly of economic theory because economics abstracts from the details of the situation which influence choice behavior, although they are irrelevant from the perspective of instrumental rationality. These results show that economics is not allowed to abstract from the process of preference-formation, because part of the process of preference-formation happens in the early stages of a decision problem. Behavioral economic theories accommodate this fact by modeling how people create a subjective representation of the decision problem. Behavioral economics changes the level of abstraction of economic theory to allow economics to model typical choice situations. It models processes from which standard economics abstracts and includes additional properties like reference points.

This shows that the argument from the method of abstraction is insufficient to establish that economics is autonomous from psychology. Psychology can lead to a change in the level of abstraction of economics by demonstrating that economics is abstracting from difference-making details.

References

[Camerer (1999)] C. Camerer. Behavioral economics: Reunifying psychology and economics. Proceedings of the National Academy of Sciences, 96:10575–10577, 1999.
[Gul and Pesendorfer (2008)] F. Gul and W. Pesendorfer. The case for mindless economics. In A. Caplin and A. Schotter, editors, The Foundations of Positive and Normative Economics, pages 3–43. Oxford University Press, 2008.
[Tversky and Kahneman (1986)] A. Tversky and D. Kahneman. Rational choice and the framing of decisions. The Journal of Business, 59(4):S251–S278, 1986.

Steven French.
Is Monism a Viable Option in the Philosophy of Physics?

Abstract: Within metaphysics, monism has come under sustained attack on a number of grounds. It is argued, for example, that it cannot capture the way permutations operate in statespace; that it cannot avail itself of the standard Lewisian account of intrinsicality (see Sider, T., ‘Against Monism’, Analysis, 67 (2007), pp. 1-7); that its apparent austerity is paid for in ideological profligacy (see Schaffer, ‘Why the World has Parts: Reply to Horgan & Potrč’, forthcoming); and that it violates the ‘More Bang for the Buck’ principle (posit as few fundamental entities as possible to ground as many derivative entities as possible). It is further argued that whereas the metaphysical pluralist can ‘piggyback’ her metaphysics on the relevant physics and thereby give a detailed grounding story about the relationship between the appearances and certain fundamental facts, the monist is hampered by the nature of these facts as ‘sub-world features’, which means that she cannot accommodate them in terms of an equivalently detailed grounding story (Sider, T., ‘Monism and Statespace Structure’, in Robin Le Poidevin (ed.), Being: Developments in Contemporary Metaphysics, CUP (2008), pp. 129–150).

I shall argue that by drawing on the nature and role of laws and symmetries in modern physics, the monist can respond to all of these concerns. Thus, permutations are accounted for via Permutation Invariance, to be regarded as a fundamental feature of ‘the world’, regarded monistically; the Lewisian account is in trouble anyway in the context of modern physics (see S. French & K.
McKenzie, ‘Thinking Outside the (Tool)Box: Towards a More Productive Engagement Between Metaphysics and Philosophy of Physics’, The European Journal of Analytic Philosophy, 8 (2012), pp. 42-59); the ideological profligacy can be reduced with an appropriate understanding of the relationship between laws and properties; and the MBftB principle can be satisfied via an appropriate iterative understanding of the relationship between physics and metaphysics. More importantly, perhaps, I shall argue that with a clear picture of the

relevant physics to hand, the monist can give just as detailed a ‘grounding story’ as the pluralist.

I shall conclude with two observations: first (commonplace), that by paying attention to the relevant physics, metaphysical debates might thereby be enhanced; second (more interesting, perhaps), that the relevant physics might be seen as compatible with either pluralism or monism, but that there is a sense in which it straddles both positions.

Marc Ereshefsky and Thomas Reydon. Scientific Kinds: A Critique of HPC Theory and a Proposal for an Alternative Account

Abstract: In the past decades, Boyd's Homeostatic Property Cluster Theory (HPC Theory) has become the received view of natural kinds in the philosophy of science, in particular in philosophy of biology. In this paper, we argue that this enthusiasm for HPC Theory is unwarranted and propose an alternative account of natural kinds.

We criticize HPC Theory by pointing to what we believe is a fatal flaw in the theory: on the one hand it neglects many kinds highlighted by scientific classifications, while on the other hand it includes kinds not recognized by science. One aspect of this problem is that HPC Theory imposes overly prescriptive constraints on what counts as a natural kind by holding that natural kinds must be groups of entities that share a cluster of projectable properties sustained by homeostatic causal mechanisms. Many successful research programs in science, however, offer classifications that do not meet these prescriptions. By way of example, we discuss three groups of such kinds: non-causal kinds, functional kinds and heterostatic kinds. The other aspect of the problem (that HPC Theory recognizes kinds not recognized by science) is due to the theory's focus on similarity as a defining criterion of kinds. On HPC Theory, any case in which a number of properties repeatedly cluster together due to underlying factors is an instance of a natural kind that can feature in inferential statements. An example is biological species.
According to Boyd, species need not be historically defined. But by recognizing non-historically defined species as respectable scientific kinds, Boyd's theory allows for kinds that the two major approaches to biological systematics, Cladistics and Evolutionary Taxonomy, do not recognize.

The root of the problem is that HPC Theory assumes that all scientific classifications principally feature in inferential practices and should capture similarity clusters, an assumption which fails to acknowledge that classifications in science can also have different aims. Our search, thus, is for an account that better recognizes the diversity of epistemic aims scientists have for constructing classifications.

We develop our alternative account by using the notion of a classificatory program, i.e., that part of a discipline that produces a classification to serve the particular aims of that discipline. It has three parts: sorting principles, motivating principles, and the classification itself. Sorting principles sort the entities under consideration into kinds within a classification. Motivating principles justify the use of these sorting principles and in doing so embody the program's specific aims for positing a classification. The problem with HPC Theory is that it recognizes only a limited scope of sorting and motivating principles. A more naturalistic account of kinds that better reflects scientists' classificatory aims should allow a broader scope of sorting and motivating principles. But it should also place constraints on which classificatory programs highlight natural kinds, in order to avoid the position that natural kinds are simply whichever kinds scientists happen to recognize. We propose and explicate three such constraints: internal coherence, empirical testability and progressiveness. Natural kinds are kinds highlighted by classificatory programs that meet these criteria.

Jonathan Birch. Gene mobility and the concept of relatedness

Abstract: Few concepts are as central to the study of social evolution as the concept of ‘genetic relatedness’, yet the notion is notoriously difficult to pin down precisely. Most textbooks, in line with Hamilton's original (1964) presentation of inclusive fitness theory, introduce relatedness as an intuitive measure of genealogical kinship: for example, under diploid genetics, relatedness is one half between full siblings, one eighth between first cousins, and so on. But in formal work on social evolution, relatedness is more commonly interpreted (following Grafen 1985) as a generalized statistical measure of genetic similarity between social partners. Though these ‘intuitive’ and ‘generalized’ measures often agree, they come apart when genetic similarity between social partners is caused by a mechanism that does not rely on shared ancestry.

In microbial populations, we now know that cooperative behaviour is widespread, and we also know of at least one mechanism that generates genetic similarity in the absence of shared ancestry: horizontal gene transfer (HGT). Moreover, there is reason to think HGT makes a significant difference to the evolution of microbial cooperation, since cooperative traits are overrepresented on mobile genetic elements (Rankin et al. 2011). One plausible explanation for this is that gene mobility promotes cooperation by virtue of its effects on relatedness.

We might regard this as one context in which the ‘generalized’ measure of relatedness straightforwardly triumphs over the ‘intuitive’ measure, but I argue that there is a further twist in the tale. For I contend that, on closer inspection, HGT demands a yet more radical revision of our intuitive concept of relatedness. This is because HGT implies that we cannot talk of an organism's genotype simpliciter—only of its genotype at a particular time.
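The contrast between the two measures can be made concrete. On the Grafen-style ‘generalized’ reading, relatedness is a statistical association — in the simplest case a regression coefficient of recipient genotype on actor genotype — and it is indifferent to whether the similarity arises from common ancestry or not. A toy computation on synthetic haploid data (the population setup is invented for illustration):

```python
import random

def relatedness(actor_g, recipient_g):
    """Regression-style relatedness: cov(actor, recipient) / var(actor)."""
    n = len(actor_g)
    ma, mr = sum(actor_g) / n, sum(recipient_g) / n
    cov = sum((a - ma) * (r - mr) for a, r in zip(actor_g, recipient_g)) / n
    var = sum((a - ma) ** 2 for a in actor_g) / n
    return cov / var

random.seed(2)
actors = [random.randint(0, 1) for _ in range(200_000)]

# Haploid full siblings at one locus: with probability 1/2 the recipient shares
# the actor's allele via a common parent, otherwise it is an independent draw.
siblings = [a if random.random() < 0.5 else random.randint(0, 1) for a in actors]
print(round(relatedness(actors, siblings), 2))  # near the genealogical value 0.5

# Unrelated recipients that acquire the actor's allele 'horizontally' half the
# time: the same statistic registers the same similarity with no kinship at all.
hgt = [a if random.random() < 0.5 else random.randint(0, 1) for a in actors]
print(round(relatedness(actors, hgt), 2))
```

The genealogical measure assigns zero to the second population, which is exactly where the two measures come apart. Birch's point goes further: once alleles move between organisms, genotypes themselves must be indexed to times.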
This introduces a temporal aspect to relatedness, and leads us to ask: at which stage in the life-cycle should relatedness be evaluated? In particular, is it genetic similarity at the time of action that matters to the evolution of cooperation, or genetic similarity at the time of reproduction?

I argue that, when HGT is at work, neither of these suggestions is correct: the sort of genetic similarity that matters is diachronic genetic similarity between actors at the time of action and recipients at the time of reproduction. I argue for this claim by means of a simple model. The model shows that, as long as there is diachronic genetic similarity, altruistic behaviours can evolve even in the absence of synchronic genetic similarity at the time of action or at the time of reproduction. The upshot is that, in microbial contexts, we should further revise our intuitive concept of genetic relatedness to reflect the special importance of diachronic genetic similarity.

References
Grafen A (1985) A geometric view of relatedness. Oxf Surv Evol Biol 2:28-90
Hamilton WD (1964) The genetical evolution of social behaviour. J Theor Biol 7:1-52
Rankin DJ, Rocha EPC, Brown SP (2011) What traits are carried on mobile genetic elements and why? Heredity 106:1-10

Jürgen Landes and Jon Williamson. Scoring Rules for Objective Bayesianism

Abstract: Objective Bayesianism, developed in [1], is based on three norms a rational agent ought to adhere to when forming beliefs on a propositional language L. These norms require that:
1) beliefs should be probabilities (Coherence),
2) beliefs should be calibrated to the agent's evidence of physical probabilities (Calibration),
3) otherwise beliefs should equivocate between the sentences of L (Equivocation).

The probability norm is usually justified by a Dutch Book argument which assumes that a rational agent avoids sure loss. The calibration norm, also known as the Principal Principle, is normally justified in a slightly different betting scenario with repeated bets. Here, the agent's rationality is captured by expected loss avoidance. Finally, the equivocation norm, which amounts to Shannon entropy maximization among the calibrated probability functions, can also be justified by considering a betting scenario. In this third scenario an agent aims to avoid worst-case expected default loss.

In this paper we will examine how a single justification for all three norms can be given in terms of worst-case expected default loss avoidance. This loss is computed employing scoring rules, which have become a popular method to assess probabilistic forecasters. We will show that the, on the face of it, most natural scoring rule, coupled with worst-case expected default loss avoidance, makes an agent obey the first two norms and maximize generalized Shannon entropy among the calibrated probability functions.

Two classes of scoring rules for the assessment of probabilistic forecasters have received particular attention in the literature. The first is the class of strictly proper scoring rules, which give the best score to the forecast which matches the physical probability. The second class is the local scoring rules. A scoring rule is local if and only if the associated loss in case event E obtains depends only on the forecasted probability of E obtaining. We will see that there does not exist a local and strictly proper scoring rule on the class of coherent and non-coherent forecasters.

References
[1] Jon Williamson. In Defence of Objective Bayesianism. Oxford University Press, 2010.

Paul Dicken. Normativity and the Base-Rate Fallacy

Abstract: Recent literature in the scientific realism debate has been concerned with a species of statistical fallacy that appears to undermine both realist and empiricist arguments regarding the truth of our scientific theories.
Howson (2000) for example argues that while the likelihood of a true theory generating successful predictions is high, one cannot infer the corresponding likelihood of a successful theory being true without taking into account the base-rate probability of any arbitrary theory being true. Similarly, Lewis (2001) complains that while the history of science may furnish us with numerous examples of initially successful theories being proved false, one may not infer the probability of our current theories going astray without knowing the underlying likelihood of a false theory generating successful predictions. Magnus and Callender (2004) conclude that this tendency to ignore base-rates explains the sense of intractability that pervades recent discussion in the scientific realism debate: that while further case studies may increase or decrease the respective probabilities, all such considerations will be swamped by the underlying probability of any arbitrary scientific theory being true – something over which realists and empiricists can only trade intuitions.

The result has been a reconceived focus for the scientific realism debate. In the view of Magnus and Callender, we should abandon the traditionally wholesale arguments for scientific realism that rely upon sweeping statistical claims in favour of a series of retail arguments targeting specific cases. Psillos (2009) argues that the scientific realist should be concerned with the likelihood of an individual scientific theory being true given its predictive success; while Saatsi (2010) suggests that we should prefer those arguments for realism that are concerned with the distinctive content of the inference in question, rather than in terms of its generic form.

This narrowing of focus has brought with it a renewed emphasis upon the history of science that has undoubtedly enriched the debate.
Yet it also raises deeper concerns, since as the argument over scientific realism becomes increasingly tied to the specifics of individual theories, it becomes increasingly difficult to see what the distinctively philosophical contribution of that debate might be. Certainly, one would no longer be able to conclude anything from the general evaluation of the patterns of reasoning exemplified in scientific practice if one also considers such reasoning to be fundamentally context-specific. The scientific realism debate threatens therefore to become just another aspect of the first-order scientific deliberations with which it was initially concerned – a conclusion that some critics may welcome, but one that is in tension with the explicit intentions of those philosophers who take themselves to be reformulating the scientific realism debate, rather than abandoning it altogether. I illustrate this situation with respect to the so-called no-miracles argument in favour of scientific realism, and consider some of the proposals for restricting the scope of such reasoning in a way that avoids the risk of statistical error. I conclude however that attention to the base-rate fallacy effectively marks the end of the scientific realism debate as currently understood.

Rachel Cooper. Defining “mental disorder”: Problems with using conceptual analysis to understand “human kind” terms.

Abstract: A sizeable literature in the philosophy of science seeks to employ conceptual analysis to yield an account of our concept of “mental disorder”, or more broadly, “disorder”. Such work depends on the idea that conceptual analysis can be used to make tacit knowledge explicit. I discuss three reasons to doubt that this research project will meet with success. My concerns have ramifications beyond debates about the definition of “mental disorder”, and will extend to many projects that seek to use conceptual analysis to understand key terms used in the human sciences (such as “woman”, “black”, “disabled”).

Worry 1. Our intuitions about the disease-status of particular conditions are malleable. They are historically and culturally shifty, and have been manipulated by interested parties (eg marketing by pharmaceutical companies).
As a consequence, an “intuition” that depression is a disorder may be no more trustworthy than the belief that Heinz make the best baked beans.

Worry 2. A basic assumption of traditional conceptual analysis is that, when it comes to commonly understood words, competent English speakers should be able to say how they would describe various situations. An analysis of the debate over whether Obama should count as the first Black President reveals that when it comes to “human kind” terms this assumption is problematic. Competent English speakers often hesitate to pass judgment as to whether particular people count as “black”, or “women”, or “disordered”.

Worry 3. Recent disagreements about the definition of “mental disorder” to be included in the new edition of the main classification used by psychiatrists, DSM-5, reveal that the extension of “mental disorder” is currently indeterminate. It is currently not fixed whether a dysfunction that is not harmful (eg possibly certain cases of Asperger’s) should be counted a disorder. Joseph LaPorte (2004) Natural Kinds and Conceptual Change (CUP) suggests that such cases are not unusual. Many scientific terms are vague, and are made precise only when disputes make the vagueness explicit.

Our intuitions about “disorder”, “mental disorder”, and by extension many other human kind terms, are compromised, inaccessible, and indeterminate. Projects that seek to use conceptual analysis to provide a correct descriptive account of such concepts are thus likely to fail. On a more positive note, I suggest that progress can be made once we recognize the source of these problems. Definitional disputes about human kind terms are best understood as veiled moral and political arguments about how we think humans should live, rather than disputes about what is or is not the case.

Roberto Fumagalli.
Economic Models and Neuro-Psychological Ontologies: Three Challenges to Realisticness

Abstract: Economic models of choice are often criticized for failing to provide accurate representations of the micro-causal and mechanistic underpinnings of people’s decisions. In recent years, several authors (e.g. Loewenstein et al., 2008, and McCabe, 2008) have argued that neuro-psychological findings enable economists to overcome this alleged shortcoming. Some (e.g. Camerer, 2007, and Rustichini, 2009) have gone as far as to urge economists to replace their ‘as if’ representations with more detailed accounts of the neuro-psychological substrates of choice. In such a context, various leading researchers (e.g. Camerer, 2008, and Glimcher, 2010, ch.4-6) advocate a realistic interpretation of the currently best available neuro-psychological models of choice. The idea is that the sub-personal entities posited by such models: (1) have precisely identifiable counterparts (e.g. anatomically localized neural areas) in the human neuro-psychological architecture; (2) possess the properties (e.g. specific neuro-biological features) these models ascribe to them; and (3) have their properties characterized accurately by those models.

In this paper, I examine recent research on decision-making and argue that the available evidence does not license this realistic interpretation of neuro-psychological models of choice. The contents are organized as follows. In Section 1, I explicate this realistic interpretation and contrast it with the ‘as if’ interpretation many economists give to their models of choice. In Sections 2-4, I adopt a more critical stance and put forward three arguments to demonstrate that neuro-psychological modellers are not in the epistemic position to substantiate their realistic interpretation. The first argument builds on the current paucity of precise identity and persistence conditions for the sub-personal posits postulated by neuro-psychological modellers. My second argument targets the limited evidential reach of the tools used to collect and interpret the data on which neuro-psychological models of choice are based.
The third argument criticizes neuro-psychological modellers for failing to employ stringent methodological criteria to constrain the number and types of posits they postulate. In Section 5, I illustrate some implications of my three arguments for both theoretical debates about scientific representation and the pragmatics of modelling in the decision sciences.

My three-fold challenge provides choice modellers with conceptual, evidential and methodological reasons for regimenting the interpretation of their models. In articulating this challenge, I address various issues that are frequently discussed in the literature on scientific modelling and representation. I shall devote particular attention to debates concerning: the ontological status of the sub-personal entities posited in distinct decision sciences (see e.g. Hausman, 1998, and Mäki, 2005, in economics); the resemblance relations that supposedly connect scientific models to the investigated phenomena (see e.g. French, 2003, and van Fraassen, 1994, on isomorphism); and what conditions models have to satisfy to be regarded as realistic representations of their target systems (see e.g. Contessa, 2007, and Giere, 2004).

Luke Glynn. Ceteris Paribus Laws and Minutiae Rectus Laws

Abstract: Special science generalizations admit of exceptions. There are, moreover, *prima facie* difficulties in seeing how non-exceptionless generalizations can support counterfactuals, entail objective chances, underwrite relations of causation and explanation, and play other aspects of the *law role*.

In the literature, the notion of a non-exceptionless 'law' is often equated with that of a *ceteris paribus* 'law'. *Ceteris paribus* laws are generalizations that hold only under normal, or even ideal, conditions. The generalizations of the special sciences hold only *ceteris paribus* because such sciences characterize only limited domains (e.g. biological, meteorological, or economic domains), and interference from factors outside these domains is possible.
Special science generalizations hold only where there is no such interference, or where it doesn't make a (significant) difference.

Productive philosophical effort has gone into distinguishing various types of *ceteris paribus* law, and into investigating to what extent 'laws' of these various stripes play the law role. This is important work. However, I argue that there is another category of non-exceptionless generalization that has often not been properly distinguished from *ceteris paribus* laws: namely, (what I call) '*minutiae rectus* laws'. The Second Law of Thermodynamics is an example of the latter.

The Second Law is not obviously a *ceteris paribus* law, but it is well known that it admits of exceptions. Given an initial low-entropy state of an isolated system, it is possible though very unlikely that the microstate should be one that leads, by the fundamental dynamic laws, to a later state of even lower entropy. Such an exception to the Second Law has nothing to do with the violation of any *ceteris paribus* clause. Specifically, it has nothing to do with interference from outside the domain that the Second Law applies to, or with the failure of any of its idealizations to obtain. Even assuming an ideal isolated system, the Second Law may be violated just as a consequence of certain unlikely microphysical realizations of the system's initial thermodynamic state.

This type of exception is one of which many special science generalizations admit, including many that have *ceteris paribus* clauses. Rather than having to do with the violation of a *ceteris paribus* clause (due to influences from outside the domain to which the generalization applies), this type of exception is a result of the multiple realizability of the properties that the generalization relates. I call generalizations that admit of this sort of exception '*minutiae rectus* laws': laws that hold only when the properties that they relate are realized in the right microphysical way.

If special science generalizations hold only *minutiae rectus*, then (I argue) this poses a problem for their ability to play the law role in a way that their *ceteris paribus* nature may not.
I argue that the best prospect for defending the view that special science generalizations are nevertheless laws is to argue that such generalizations are not, after all, *minutiae rectus* generalizations, but rather are probabilistic laws. I explore the prospects for such an argument.

Juha Saatsi. Realism, Explanatory Indispensability, and Ontic Accounts of Explanation

Abstract: Various arguments in philosophy of science draw a link between the (indispensable) explanatory role of a theoretical belief (or posit), and realism regarding that belief (or posit). A paradigmatic example of such an argument is the Indispensability Argument for mathematical realism.

Recent debate around the (explanatory) indispensability argument has revolved around the question of whether mathematics plays a genuine explanatory role in science (e.g. A. Baker, ‘Mathematical explanation in science’, BJPS, 2009). This argument turns on showing “that reference to mathematical objects sometimes plays an ‘explanatory role’ in science.”

It is remarkable that in this debate the key notion of the ‘explanatory role’ of mathematics has not been analysed in any detail in relation to existing accounts of scientific explanation. The contribution of this paper is to urge the importance of doing this, and to make a start in this (considerable) task.

I will begin by recalling some useful distinctions in the explanation literature, such as the distinction between epistemic, ontological, and pragmatic accounts of explanation (Salmon, 1984). Focusing on the ontological tradition, I will then draw in general terms a novel distinction between ontologically committing ‘thick’, and ontologically peripheral ‘thin’ explanatory roles.

Roughly speaking: a ‘thick explanatory role’ is played by a fact that bears an ontic relation of explanatory relevance to an explanandum. A ‘thin explanatory role’ is played by something that allows us to grasp, or (re)present, whatever plays a ‘thick’ explanatory role.

I use this distinction to rephrase the key point of my earlier criticisms of the explanatory indispensability argument (‘The enhanced indispensability argument’, BJPS, 2011). I will then elaborate on this point by presenting the distinction between ‘thick’ and ‘thin’ in more precise terms in the context of specific accounts of explanation.

The critical distinction between ‘thick’ and ‘thin’ can only be properly drawn, I maintain, in the context of an account of explanation. I will do this, with one or two examples to illustrate, for three ‘ontic’ accounts of explanation: (1) the Program Explanation (Jackson and Pettit 1990); (2) the Kairetic account (Strevens 2008); (3) the counterfactual account of explanation of Woodward (2005).

In connection with (1) I will recall my response to Lyon (AJP, 2012), who argues that mathematics' explanatory role in science can be viewed as a ‘programming’ role, in the sense of Jackson and Pettit (1990). The key issue with Lyon's argument is that it fails to establish that mathematics plays a thick explanatory role.

In connection with (2) I will similarly discuss how Strevens' (2008) ontic account of explanation, turning on a difference-making criterion of explanatory relevance, makes room for genuine mathematical explanations in science in which mathematics nevertheless only plays a thin explanatory role.

Finally, in connection with (3), I will similarly look at Woodward's counterfactual theory of explanation as another example of an ontic account in which mathematics may be viewed to play an indispensable but thin explanatory role.

P. Kyle Stanford.
Getting What We Pay For: Changing Incentives and the Closing of the Scientific Mind

Abstract: Although philosophers of science often disagree widely concerning the scope and character of the knowledge we obtain from scientific inquiry, they nonetheless typically agree that the scientific enterprise has steadily become better and better over time at doing whatever it is that it does so well. I argue, however, that there are substantial reasons for thinking that the modern scientific enterprise has instead degenerated over the course of its history in at least one crucial respect: its capacity to foster and develop genuinely novel, revolutionary, or transformative science. More specifically, I will argue that the changes to the modern scientific enterprise independently regarded as most significant and profound by historians of science have each served to make that enterprise systematically more theoretically conservative in the research that it pursues. As science itself has evolved from the activities undertaken by amateur gentleman-scholars in the earliest scientific societies to those of the professionalized scientific communities and research universities of the 19th Century to contemporary state-sponsored academic science, the social, political, and institutional organization of scientific activity has consistently reduced the incentives offered to scientists for pursuing theoretical innovation while expanding the incentives for conducting something like what Kuhn called “Normal Science” instead.
Perhaps most importantly, the shift following WWII to a system of competitive, peer-reviewed funding for specific investigative projects has dramatically restricted not only the incentives but also the freedom available to scientists to pursue genuinely novel, creative, revolutionary, or transformative theoretical approaches, a development further amplified, I suggest, by the ongoing expansion of so-called “Big Science”, in which scientific investigations involve ever-larger numbers of individual researchers and the research direction(s) of labs or research groups are ever-more-tightly controlled by the senior investigators in those groups who bring in the external grants that keep them operating. I briefly consider whether there are practical alternatives to or modifications of the existing system of incentives that might better encourage the development of revolutionary or transformative science, and I conclude by arguing that the extent to which we should be troubled by the prospect of increasing theoretical conservatism in science largely depends on what view we take of the ongoing dispute concerning scientific realism, either in general or as applied specifically to particular scientific fields: if existing scientific theories are “approximately true” and further changes in our beliefs will simply build on them, it seems that we can tolerate or even celebrate the increased and increasing theoretical conservatism of contemporary science as a prudent investment strategy for limited resources, while if we think instead that the further changes still to come in our scientific beliefs will be as radical as those separating Descartes’ physics from Aristotle’s, Newton’s from Descartes’, and Einstein’s from Newton’s, such increasing conservatism is far more troubling. Thus, how we incentivize and distribute resources for different sorts of scientific work turns out to be one of the few places where deciding whether or not we should be scientific realists should actually make a difference to how we conduct scientific inquiry itself.

Kyle Scott. Induction and Sensitivity

Abstract: How are we to describe the difference between a justified and an unjustified inductive inference? This has turned out to be a rather difficult question. It is a problem that is identified by Nelson Goodman’s new Riddle of Induction. I propose some progress can be made on this subject by suggesting sensitivity as a necessary condition for an inductive inference being justified.

This has been attempted previously by Peter Lipton (2000).
He proposed that an inductive inference is justified only if, had the conclusion of that inference been false, the agent would not have drawn it. The intuition behind this is that an inductive inference can be justified only if there is a connection between your evidence and the conclusion such that, had the conclusion been false, you would expect your evidence to be different. For example, the reason that it is reasonable to infer that all ravens are black when you observe many black ravens is because you would not have expected to find only black ravens had this conclusion been false. Likewise, it is not reasonable to infer that all stars have been observed before now just because it is true that all stars you have observed have been observed before now, because this is just what you would expect if the conclusion were false. This, unfortunately, has difficulty accommodating inductive inferences with false conclusions, necessary conclusions, or where there are important factors that the agent is not aware of.

I will outline these problems, before offering an alternative proposal that preserves Lipton’s intuition while also avoiding the problems that he faced.
This is my proposal:

(T) An inductive inference with conclusion C is justified for an agent S only if, in the world that is closest to W where ¬C, S does not believe C, where
(i) W is the world that is as S believes the actual world to be;
(ii) for any two worlds W1 and W2, W1 is closer to W than W2 iff the propositions in W that are false in W1 are less certain for S than the propositions in W that are false in W2.

Given the similarity between this proposal and the sensitivity condition that has been offered by others as an analysis of knowledge, I will consider whether this proposal faces the same problems, and argue that these problems can be overcome.

Following this I will briefly describe how this proposal can be used to make progress in another debate in the Philosophy of Science. By adopting my view of induction the scientific realist will be able to show why the pessimistic induction is flawed.
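The contrast between the raven and star inferences in this abstract can be given a toy probabilistic gloss. The sketch below is an editorial illustration, not Lipton's or Scott's own formalism, and the numbers (a hypothetical 5% non-black rate among ravens in ¬C-worlds, 100 observations) are assumptions chosen only for illustration: the raven inference comes out sensitive (in worlds where its conclusion is false, the agent would almost never draw it), while the star inference comes out insensitive (the evidence is the same whether or not the conclusion is true).

```python
import random

def observe_ravens(all_black, n=100, nonblack_rate=0.05):
    """Sample n ravens; in worlds where not all ravens are black,
    each observed raven has some (assumed) chance of being non-black."""
    if all_black:
        return ["black"] * n
    return ["black" if random.random() > nonblack_rate else "white"
            for _ in range(n)]

def concludes_all_black(observations):
    """The agent infers 'all ravens are black' only if every
    observed raven was black."""
    return all(colour == "black" for colour in observations)

random.seed(0)
trials = 2000

# Sensitive inference: in ¬C-worlds (not all ravens are black),
# how often would the agent still draw the conclusion anyway?
raven_rate = sum(concludes_all_black(observe_ravens(all_black=False))
                 for _ in range(trials)) / trials

# Insensitive inference: 'all stars have been observed before now'.
# Every star the agent checks has, trivially, been observed, so the
# evidence is identical in C-worlds and ¬C-worlds and the agent
# draws the conclusion in every ¬C-world.
star_rate = sum(True for _ in range(trials)) / trials

print(f"raven inference drawn in ¬C-worlds: {raven_rate:.3f}")  # near 0
print(f"star inference drawn in ¬C-worlds:  {star_rate:.3f}")   # 1.000
```

On these assumptions the raven conclusion survives in a ¬C-world only with probability 0.95^100 ≈ 0.006, which is the sense in which the inference is sensitive; the star inference is drawn in every ¬C-world, which is why Lipton counts it unjustified.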

Mauricio Suárez. Deflationary Representation and Practice

Abstract: It has become commonplace that representation in science may not be analysed, but is a primitive notion (Giere, 2004; Suárez, 2004; Van Fraassen, 2008). This is a sort of deflationism akin to the homonymous view in discussions regarding the nature of truth. There, we are invited to consider the platitudes that the predicate “true” obeys at the level of practice, disregarding any deeper, or more substantial, account of its nature. Yet the motivation for deflationism in these two different areas is arguably also distinct. The motivation behind the move towards “primitivism” or, more generally, deflationism regarding scientific representation is the recognition that representation is first and foremost an element of a practice – the practice of model building in science. This recognition explains why the emphasis has moved in recent discussions away from considerations regarding the nature of the representational relation between the objects that play the role of sources and targets, and their shared properties. Instead, the focus nowadays is chiefly on considering the activities involved in the diverse representational practices across the sciences. This is at the heart of what I have elsewhere called the turn from the analytical towards the practical inquiry (some outstanding instances of which are Knuuttila (2009) and the collection of essays in Gelfert (ed.), 2011).

But what exactly is it to hold a deflationary view of some concept X? I first define the contrary view, a substantive one, as any analysis of X, in terms of some property P or relation R, that accounts for and explains the standard use of X. I then go on to characterise a deflationary view of X, in opposition, in three distinct senses, namely: a “no-theory” view, a “minimalist” view, and a “use-based” view. I attend to how these three views have played out specifically in the philosophical literature on truth.
Finally, I argue that the key to deflationary accounts of scientific representation, under any sense of “deflationary”, is that representation is not a property of sources, or targets, or their relations, but is instead best understood as a set of necessary features of the practice of representing.

References
Gelfert, A. (ed., 2011), Model-Based Representation in Scientific Practice, special issue, Studies in History and Philosophy of Science, 42, 1, pp. 251-398.
Giere, R. (2004), “How Models are Used to Represent Reality”, Philosophy of Science, 71, pp. 742-752.
Knuuttila, T. (2009), “Some Consequences of the Pragmatist Approach to Representation: Decoupling the Model-Target Dyad and Indirect Reasoning”, in EPSA: Epistemology and Methodology: Launch of the European Philosophy of Science Association, pp. 139-148.
Suárez, M. (2004), “An Inferential Conception of Scientific Representation”, Philosophy of Science, 71, pp. 767-79.
Van Fraassen, B. (2008), Scientific Representation, Oxford University Press.

Lena Zuchowski. Poincaré's chaotic heritage

Abstract: Scientists, mathematicians and philosophers working in chaos theory have been quick to claim that their field of study is a heritage bequeathed by the great French scientist Poincaré (e.g. Smale, 1967; Aubin and Dalmedico, 2002). This idea has been picked up by scholars focusing specifically on Poincaré's work as well (e.g. Peterson, 1993; Diacu and Holmes, 1996; Barrow-Green, 1997).

A careful perusal of the available literature shows that there seem to be two separate parts to these claims:

a) The solutions to the three-body problem analysed and constructed by Poincaré (1884, 1890, 1891, 1899) have been interpreted as the “first example of chaotic behavior in a deterministic system” (Diacu, 1996, p. 67). Accordingly, Poincaré has been viewed as ‘discoverer’ of chaos.

b) The following quotation from Poincaré (1914, p. 67) is cited as indicating Poincaré's awareness of an important characteristic of chaos, sensitivity to initial conditions: “[I]t may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible [...].”

As Parker (1998) pointed out, attempts to represent Poincaré as the discoverer of chaos suffer from the fact that parts a) and b) above seem to be treated by Poincaré as of little significance. Accordingly, authors like Diacu and Holmes (1996) have to overcome the difficulty of explaining why Poincaré seemed to have ‘missed’ the importance of chaos. In the case of Diacu and Holmes (1996) this is done by a rather speculative appeal to Poincaré's suspected conservative character; other authors (e.g. Kellert, 1993) have sought to use more general societal and technological factors as an explanation.

I have translated and reviewed the parts of Poincaré (1884, 1890, 1891, 1899, 1914) pertaining to parts a) and b) with the aim of evaluating the value of Poincaré's heritage to chaos theory. A detailed perusal of these works shows that Poincaré was well aware of, and analysed in detail, the possibility of instability as a kind of sensitivity to initial conditions. In this, however, he did not differ from the majority of his contemporaries (e.g. Lissauer, 1999). As such, Poincaré's methods of analysing the distribution of stable and unstable areas of phase space (associated with part a) above) have indeed been exported into the branch of chaos theory concerned with analytic functions (e.g. Smale, 1967; Nolte, 2010).

However, the modern interpretation of chaos covers systems that are both sensitive to initial conditions (for reasons clearly going beyond instability) and aperiodic (e.g., as defined by Kellert, 1992). Poincaré's work does not directly inform on this latter aspect of chaos.
Accordingly, the close tying of the whole of chaos theory to Poincaré appears to be part of a rhetorical strategy rather than a ‘natural’ development of his scientific heritage.

Florent Franchette. The Church-Turing thesis and the limits of cognitive computation

Abstract: According to the Church-Turing thesis, functions which are computable by an effective procedure are computable by a Turing machine (TM-computable). This thesis has given rise to two interpretations about the limits of cognitive computation: one is a strong interpretation according to which the thesis establishes what is computable by a human being; the other is a weak interpretation according to which TM-computable functions are only the functions which can be effectively computed by a human being. Advocates of the weak interpretation claim that human cognition could be able to hypercompute, a concept denoting the possibility of computing more than the Turing machine. In this talk, I will argue against the weak interpretation by providing objections against the hypercomputation model of the human brain proposed by Siegelmann.

Seamus Bradley. The role of rationality in rational choice and theory choice

Abstract: It is often assumed that, given a decision scenario and some appropriate description of the agent's beliefs and desires, the principles of rationality fix or determine how the agent should behave. I argue that rationality should instead be thought of as merely ruling out courses of action the agent should not take. I first apply this idea to cases of rational choice under ambiguity. I then argue that a similarly weakened version of what rationality requires makes more palatable the conclusions of recent “impossibility theorems” for theory choice, for example Okasha (2011).

Consider an ideal agent who has the choice between £1 and a bet that wins £2 if a fair coin lands heads and nothing otherwise. (Assume that the agent has a neutral attitude to risk, or otherwise assume that the prizes are modified so as to compensate for whatever aversion to risk the agent feels.) These two options have the same expected value, so the principles of rationality are silent on which of these options the agent should choose. It would still, however, be irrational for the agent to choose a bet that paid her less than £2 on a fair coin. I argue that principles of rationality should in fact be permitted to be silent more often than the standard theory allows. That is, in situations of severe uncertainty or ambiguity – situations where the agent doesn't have a good grasp of what expected values can be attached to what acts – all we can expect our principles of rationality to do is to rule out some obviously bad acts: they needn't rule in some acts as being most choiceworthy. That is, it is not just in cases of equal expectation that rationality can be silent.

I think this weaker conception of what we can expect rationality to do for us is useful in understanding the import of recent work on impossibility theorems for theory choice like Okasha (2011). Okasha takes Arrow's famous impossibility theorem for voting systems and argues that an analogous result holds for choice among scientific theories. I don't aim to cast doubt on Okasha's result, but I think it is important to understand what the conclusion of the result actually is. I argue that the impossibility result does not mean that "anything goes" in theory choice: there are still some cases where partial theory choice rules can work. Okasha's result only rules out there being a rule that always determines a best theory. There can still be weaker choice rules that successfully rule out some bad theories. So theory choice can conform to the weaker standard of rationality I have argued for above.
So Okasha's result does not show that theory choice is irrational, or that it is never possible to rationally choose between theories. This is, I think, a return to Kuhn's position: non-rational factors can influence theory choice when the purely rational theory choice rule is silent. However, this does not mean that theory choice is never rational.

Marco Dees. Minimalism About Quantity
Abstract: If physics is any guide, the most fundamental properties are quantities: properties like mass or charge that come in degrees and are properly represented with numerical scales. A theory of quantity is an account of what the world must be like, fundamentally, for the mathematical description we find of it in physics to be true.

On what I'll label the Naïve View, quantities are relations to numbers. This view faces formidable problems and should be rejected. The view requires either that one choice of unit is implausibly privileged as the fundamental relation to numbers, or requires a vast proliferation of fundamental relations that mysteriously march in lockstep. If the fundamental properties are quantities, and quantities are just relations to numbers, we are left with a picture of the world on which nothing has any interesting intrinsic nature. But surely the world is a certain way intrinsically in virtue of which parts of it bear relations to numbers and in virtue of which things behave the way they do.

According to what I call the Standard View, talk of mathematical relations among magnitudes is justified because, in an appropriate scale, the numerical structure of the scale can be used to represent the fundamental physical structure of the quantities themselves. On Brent Mundy's (1989) theory, for example, in addition to all the first-order mass magnitudes there are two fundamental second-order relations, physical-addition and physical-less-than-or-equal-to. Hartry Field (1980) posits two first-order relations, mass-between and mass-congruent. Different versions of the Standard View agree that we must posit quasi-mathematical primitives like physical-addition or mass-betweenness to ground the quantitative structure of mass.

We should reject the Standard View in favour of Minimalism, according to which quantities have no structure independently of that imposed on them by the laws in which they appear. Minimalism is an extension of causal structuralism about properties, the claim that there are no quiddistic facts: fundamental facts involving individual properties. Rather, the fundamental facts are second-order generalizations: purely structural facts. Minimalism about quantity extends this view and claims that facts about quantitative relations between properties are also grounded in facts about how properties are related by the laws. According to Minimalism, what it is for 2kg mass to be larger than 1kg mass is just for the properties to play appropriately different roles in the laws.

One reason to favour Minimalism is that the quasi-mathematical relations posited by the Standard View are explanatorily redundant, since we can make sense of the numerical structure of quantities without them. Another is that proponents of the Standard View must brazenly posit necessary constraints on the distribution of their chosen quasi-mathematical primitive. For example, Field stipulates that for any things x, y, z, w, u, v, if mass-congruent(x,y,w,z) and mass-congruent(x,y,u,v) then mass-congruent(w,z,u,v); other versions of the view make similar stipulations. Unexplained necessary connections between fundamental facts are mysterious, and we should prefer a theory of quantity that lacks this vice.

Mazviita Chirimuuta. Colour Relationism and Categorisation
Abstract: In the philosophical debate over the reality (or otherwise) of colour, a key question has been whether certain features of subjective colour experience rule out the identification of colours with objective physical properties.
The debate has concentrated on the uniqueness of hues and opponency relations (Hardin 1988, Byrne & Hilbert 2003). The fact that colour perception is categorical has received relatively less attention within philosophy.

In this paper I argue that there is a complex lesson to be learned from colour categorisation. It is not simply that colours can be equated with our categories, and are therefore subjective; or that categorisation is merely a distraction from the essential business of colour perception, which is to recover objective physical properties. Rather, categorisation should be thought of as a "warping" of perceptual space which can nevertheless serve the purposes of discrimination and re-identification of objects; yet, at the same time, certain abstractions of physical stimulus properties made possible through categorical perception serve purposes in communication which are orthogonal to any purely representational goals in perception. Such perceptual abstractions have been studied extensively in speech perception, and I discuss the relevance of this literature to colour perception. I argue that any equivocation over the subjectivity or objectivity of categorical perception can best be understood in terms of the "narcissism" of sensory systems (Akins 1996), whereby all perceptual representations are shaped by the various needs and interests of the perceiver.

I conclude that these considerations lend support to a relationist theory of colour (Cohen 2009), and in particular that they help defuse an important objection to colour relationism. Relationists define colour according to relations holding between perceivers, objects, and viewing conditions. If one of the relata is altered in a relevant way, it follows that the colour of an object changes. E.g. if the illumination in the room changes from daylight to tungsten lighting, causing a noticeable difference in my colour experiences, the relationist must say that the colours of the objects in the room have changed. This result seems to conflict with the stability of colour ascriptions. E.g. before and after the illumination change, I will say that the walls in the room are yellow. Yet the fact that ordinary language colour terms refer to quite broad categories helps explain the stability of colour ascriptions despite inter-subjective differences in colour experiences, and variations due to changes in viewing conditions.

Akins, K. A. (1996). "Of Sensory Systems and the 'Aboutness' of Mental States." Journal of Philosophy 93(7): 337-72.
Byrne, A. and D. R. Hilbert (2003). "Color realism and color science." Behavioral and Brain Sciences 26: 3-64.
Cohen, J. (2009). The Red and the Real. Oxford: Oxford University Press.
Hardin, C. L. (1988). Color for Philosophers. Indianapolis, IN: Hackett.

David Corfield. What is Casting the Shadows on the Wall?
Abstract: At this moment, a revolution in the foundations of mathematics is being pushed ahead at the Institute for Advanced Study in Princeton. The program goes by various names, reflecting the multiple sources of its ideas: Univalent Foundations, infinity-topos theory, intensional dependent type theory and, the name I shall use here, homotopy type theory. One key motivation is to have a foundational language which allows much closer contact with the frontiers of mathematical research. The status quo has us rest content with a material set theory, grounded on the predicate calculus, which is understood to allow the construction of surrogates for any mathematical entity, from Galois representations to Kähler metrics, however baroque such constructions would be if anyone cared to provide them. Univalent Foundations is meant to provide a much more natural formalisation of mathematics.

One very important aspect for philosophy is that it transpires that (typed) first-order logic is merely a projection from this type theory.
Unlike in the case of ZFC set theory, where we introduce its axioms on top of an already existing logic, here the type theory is primary. If we want to recapture first-order logic we may do so by projection out of the already existing type theory. To the extent that first-order logic has been used in core areas of philosophy, we should expect ramifications. Indeed, homotopy type theory has important things to say about identity, quantification and modality.

In the context of a philosophy of science conference, it is also worth noting that homotopy type theory allows a much more natural formulation of theoretical physics. All of the language of modern geometry (principal bundles, orbifolds, connections, curvature forms, characteristic classes) currently needed to formulate gauge field theory is readily expressed there, including higher gauge-of-gauge transformations. Prequantization and quantization processes are equally well captured.

In this talk, I will only have time to sketch some of these developments, but I aim to convince the audience that they are of considerable philosophical importance.

Andrea Polonioli. Evolution, Rationality and Coherence Criteria
Abstract: The contributed paper reconsiders a popular argument in the so-called 'rationality debate', which revolves around how much irrationality we should ascribe to human cognition (Cohen 1981; Stich 1990; Kahneman and Tversky 1996; Gigerenzer 1996; Samuels, Stich and Bishop 2002). The debate originated as a reaction to the findings reported by Kahneman, Tversky and other scholars in the heuristics and biases project, which collectively suggest that subjects are prone to cognitive biases (Kahneman & Tversky 1979). Among the arguments aimed at rebutting pessimistic interpretations of psychological findings, one is based on evolution, and has traditionally been associated with Quine, Fodor and Dennett. In essence, the argument is that if organisms had largely false beliefs, then they would fail to navigate the world successfully. So, evolutionary theory seems to provide us with good reasons to reject the claim that people's reasoning is largely inaccurate. But here a puzzle arises. On the one hand, psychology suggests that we are prone to cognitive biases. On the other hand, evolution suggests we cannot be largely inaccurate in our reasoning. How can this puzzle be solved?

A common way to solve the problem has been suggested by Stich, who claims that 'we are safe to assume that the existence of substantial irrationality is not threatened by anything that evolutionary biology has discovered' (1990, 70). According to this strategy, the evolutionary argument is unconvincing, mainly because it rests on an adaptationist view, and does not take into account that inaccuracy might be adaptive. Whereas I am generally sympathetic to many of the points made by Stich (1990), I feel that an important distinction neglected in the literature might shed new light on the relationship between psychological evidence and evolutionary theorizing. Specifically, Hammond (1996; 2007) has suggested a distinction between two distinct criteria of rationality.
The first strategy for evaluating judgment is coherence; this evaluates the consistency of the elements of the person's judgment and comprises norms such as consistency, transitivity, and adherence to the axioms of subjective probability theory. The second strategy is correspondence, which appeals to elements external to the person's judgment; it comprises goals such as accurate prediction, successful exchanges with others, and, ultimately, the organism's survival and reproductive success.

After having introduced the distinction, I show that different philosophers usually associated with the evolutionary argument have in fact appealed to different strategies, and that there are two possible versions of the evolutionary argument for the rationality of human cognition. I then discuss Gigerenzer, Hertwig and their co-workers' application of the distinction between coherence and correspondence criteria (Arkes, Gigerenzer and Hertwig forthcoming), suggesting a new way to look at the relationship between psychological evidence and evolutionary theorizing.

Dave Race. Idealisation, Inconsistent Objects, and Representational Content
Abstract: Colyvan (Phil. Studies, 141(1): 115-123, 2008) constructs an indispensability argument for inconsistent objects. He argues that idealisations can be conceptualised as the deliberate introduction of contradictory assumptions, such that a theory posits an object with inconsistent properties. This is the second of three types of inconsistency he claims there to be in theories, along with 'inadvertent' and mathematical inconsistency.
I will investigate this conceptualisation of idealisation with a representational-content-based approach and outline how it might be extended to 'inadvertent' inconsistency.

According to recently developed theories of mathematical scientific representation, the contents of representations are to be understood as being "purely structural": representations contain only structural information and are obtained via a structural mapping from the target system to some model. How this structural content is cashed out depends on the theory in question. Pincock (OUP, 2012) argues that the content is a mathematical structure along with a "specification" that supplies the interpretation of the mathematical structure and some details on what appropriate mappings can hold between the target system and the mathematical structure. A rival account, the Inferential Conception of the Applicability of Mathematics (Bueno & Colyvan, Noûs, 45(2): 345-374, 2011), holds that the contents of mathematical scientific representations are pure mathematical structures, and that the interpretation of those structures in terms of physical systems occurs after mathematical manipulations have been performed.

I will argue that both of these accounts dissolve the appearance of idealisations as involving (in some cases) the assertion that objects have inconsistent properties. The Mapping Account makes use of further distinctions between types of representational content, in particular 'schematic content', where the idealised part of the mathematics is "decoupled" from its interpretation. In the case of the Inferential Conception, the content of representations is taken to be purely mathematical, so an object can only be held to have contradictory properties if they are carried over from the mathematical structure to the target system through the 'interpretation step'. I will argue that this does not occur, in a similar way to the Mapping Account, due to the Inferential Conception's use of partial structures.

This shows that Colyvan was wrong to think that inconsistent objects due to idealisations are indispensable to our scientific theories, as such objects were never part of the contents of our theories. However, this does not address 'inadvertent' inconsistency. A possible instance of this sort of inconsistency is the case of novel predictions. For example, the prediction of the positron could have been interpreted as a particle with negative energy. The authors of the Inferential Account claim that it can avoid this problem. The case is less clear for the Mapping Account, as it claims representational content has a physical interpretation that must be 'decoupled' from the mathematics. Thus novel predictions might arise from 'coupled' mathematics, leading to the prediction of inconsistent objects in the content of such theories. To conclude my investigation, I will sketch how this might be avoided by the Mapping Account.

Ruth Hibbert. In what sense is psychology not a mature science?
Abstract: It is often said that psychology is not a mature discipline. Can this claim be filled out in such a way that it is more than just an excuse for some perceived failings in psychology when compared to a more mature science like physics?
Can we say something about this immaturity that provides a recommendation for how psychologists should proceed, or how philosophers of science can investigate their work?

In this paper, I will consider one such way of filling out the claim about immaturity. In his 1986 paper "External and Internal Factors in the Development of Science", Dudley Shapere argues that as a science develops, it internalises certain considerations for belief formation, while others come to be seen as external and non-scientific. In the course of the development of physics, for example, religious considerations came to be seen as external, while unification of theories from different domains was internalised as a good criterion for theory choice. Which considerations are internalised is contingent; it is something that emerges from the development of the science itself, rather than being imposed from outside.

I will argue that psychology is immature in the sense that it is at an early stage in this process of internalisation of considerations for accepting beliefs. In other words, it does not currently have a clear notion of what should count as an explanation, or how we should decide what would count as a satisfactory answer to many of the questions it raises.

In particular, psychology, unlike physics, has not internalised unification between different domains as a criterion. Psychology has multiple ways of categorising and theorising about many of the phenomena within its purview. This plurality in psychology is often seen as a failing – as part of its immaturity – and unification with other domains and between domains within the discipline is seen as the cure. Therefore psychology does use unification as a criterion, but it is an external consideration because it has not emerged from within the discipline, but is borrowed from other branches of science based on an assumption that what works in one science should work in another. I claim that this assumption is unwarranted. In fact, the plural nature of current psychology speaks against internalising unification. Rather than being seen as a failing, plurality should be treated with an open mind. It may be that preserving this plurality will generate more fruitful research than imposing unification.

My conclusion is to advocate pluralism in psychology as a research strategy. It may turn out that unification will emerge over time as a consideration worthy of being internalised, but this should not be pre-judged. Until and unless it emerges as a more fruitful strategy for psychology than pluralism, pluralism should be embraced by both psychologists and philosophers of psychology as no failing at all. Rather than a sign of immaturity, it may be the route to maturity.

Adam Toon. Virtuous realism?
Abstract: Defenders of scientific realism often appeal to reliabilist theories of knowledge (e.g. Goldman 1986). One reason they do so is to argue for the possibility of gaining knowledge through the use of instruments. Put simply, according to reliabilism, the scientist can gain knowledge through an instrument just in case that instrument is, as a matter of fact, reliable. The scientist need not be able to offer reasons for thinking that this is the case. In particular, she need not be able to refute the arguments put forward by empiricists for doubting that instruments provide us with knowledge of unobservable entities.

Within contemporary epistemology, however, simple reliabilist theories are often thought to face serious difficulties. One such difficulty is that they fail to accommodate the intuition that knowledge is the product of a cognitive ability (e.g. Greco 1999). To capture this intuition, virtue epistemologists argue that it is not enough for knowledge that a belief-forming process is reliable; instead, it must also be appropriately integrated within the cognitive agent, such that her epistemic success may be credited to her cognitive agency (e.g. Greco 2003).
Rather than lending support to scientific realism, such accounts threaten to undermine it, by rendering the use of instruments highly problematic, since in such cases it appears that knowledge is primarily creditable to the instrument, not the scientist (Vaesen 2011).

Despite its prominence within contemporary epistemology, the consequences of virtue epistemology for the philosophy of science remain little explored. Virtue epistemologists differ both in the way that they understand what it is for a process to be integrated into an agent's cognitive character, and in the degree to which they require success to be creditable to the agent. Recently, some have further explicated the relevant notions of cognitive character and agency by looking to work on extended cognition in the philosophy of mind (e.g. Pritchard 2010). By drawing on studies in the history and sociology of science, I will ask whether the various conditions proposed by virtue epistemologists are met in typical cases in which instruments are used in science. By doing so, I will assess whether these virtue-theoretic approaches to knowledge support or undermine the case for scientific realism.

References:
Goldman, A. (1986) Epistemology and Cognition. Cambridge, MA: Harvard University Press.
Greco, J. (1999) Agent Reliabilism. Philosophical Perspectives 13: 273-96.
Greco, J. (2003) Knowledge as credit for true belief. In M. DePaul & L. Zagzebski (Eds.), Intellectual Virtue: Perspectives from Ethics and Epistemology. Oxford: Oxford University Press.
Pritchard, D. (2010) Cognitive ability and the extended cognition thesis. Synthese 175: 133-151.
Vaesen, K. (2011) Knowledge without credit, exhibit 4: extended cognition. Synthese 181: 515-529.

Sorin Bangu. Understanding Scientific Understanding
Abstract: Is there a relation between scientific explanation and scientific understanding? The question was probably first addressed by Hempel a good while ago in his work on the 'covering-law' model, and then revived by Friedman's seminal 1974 paper 'Explanation and Scientific Understanding', which in turn elicited a series of reactions, both criticisms and refinements (most notably by Salmon, Kitcher and Lipton).

Against this background, a debate has recently taken place between J. D. Trout and de Regt (see Trout 2002, 2005, 2007; de Regt 2004, 2009), who argued con and pro (respectively) the idea that understanding is important to the practice of providing scientific explanations. While acknowledging the importance of what is at stake in this exchange, in this paper I attempt to show that neither of the parties involved is right – and that their failures are both non-trivial and instructive, thus guiding us toward a potentially better understanding (of the role) of understanding in science. Thus, with regard to the central question – whether the 'aha!' "feeling of understanding" (or 'FU', de Regt's abbreviation in his 2009, 588) truly plays a role in science – I submit that both Trout and de Regt not only misidentify their opponent's position, and thus mostly speak past each other, but also – more importantly – fail to support their own positions.

More specifically, in my estimation the situation is rather entangled, as follows. Trout sets out initially to defend the genuinely interesting and provocative view that FU is not necessary in science, but he adduces (otherwise novel and illuminating) evidence (from psychology) to the effect that it is in fact not sufficient. So, although Trout's initial goal remains unachieved (I contend), de Regt fails to point this out (he did not actually seem to notice it), and instead veers the discussion somewhat orthogonally to Trout's concerns, by building a case for the existence and significance of what he calls 'pragmatic understanding'.
As it turns out, however, this direction is actually right, I believe, and, in agreement with de Regt, I argue that understanding does indeed have a pragmatic dimension – yet unfortunately his construal of this pragmatic function is misguided.

The more general upshot of my discussion is not merely a highlight of the shortcomings of these positions, but rather an attempt to take some steps toward a constructive goal: to begin the articulation of a novel conception of understanding (I will call it 'pragmatic-motivational'), which either has gone unnoticed (by Trout and others, most notably by Hempel himself) or has been misdescribed (by de Regt and, I argue, by Friedman and Kitcher as well). Along the way, I also connect the Trout-de Regt debate to related points raised by other authors, such as Grimm (2006, 2010) and Khalifa (2011, 2012).

Phyllis Illari. The Challenges of Information Quality
Abstract: Science and society increasingly use information, and exposure to bad information has made the importance of assessing the quality of information clear to everyone. But what is information quality (IQ) exactly? While yet to be investigated seriously in philosophy of science, this question is live in policy, often with respect to healthcare. So far, our answers to the question have been less than satisfactory. For example, in the US, the Information Quality Act (2000) left undefined virtually every key concept in the text. The issue of IQ is also important in computer science, but there attempts to understand and define aspects of IQ are proliferating, rather than converging. Current IQ literature offers no settled agreement on answers to at least four closely related questions:
1. What is a good general definition of IQ?
2. How should we classify the multiple dimensions of IQ?
3. What dimensions of IQ are there, and what do key features such as 'timeliness', 'accuracy' and so on mean?
4. What metrics might one use to measure the dimensions of IQ, bearing in mind that more than one metric may be required to yield an overall measure for a particular dimension?

What has become clear is that quality of information can only be assessed with reference to its intended use. Information is timely only if it arrives in time for its designated task, whether or not it has been processed efficiently: 'Quality has been defined as fitness for use, or the extent to which a product successfully serves the purposes of consumers …' (Kahn, Strong, & Wang, 2002, p. 185). More recently, definitions of quality dimensions in the ISO standard all make reference to a 'specific context of use' (ISO, 2008). One important feature of a context of use is normal purposes in that context.

IQ is of wide-ranging interest to philosophy of science, for example in constraining the model organism databases investigated by Sabina Leonelli. This paper focuses on the commonality between the IQ debate and quality assessment tools (QATs) and evidence hierarchies in medical evidence. What is generally sought is an assessment tool that can be laid out procedurally and used by any moderately qualified assessor, independent of any context of use. But if Cartwright (2007) and others are right that there are multiple purposes of use for medical evidence, there will be no unitary quality assessment method. This paper argues that Cartwright is indeed right, and that approaches to evidence in medicine would benefit from the approaches to quality improvement common in computer science and business, which succeed only by examining the whole processing of information, from gathering, through cleaning and maintaining, to use.

References
Cartwright, N. (2007). Hunting Causes and Using Them. Cambridge: Cambridge University Press.
Kahn, B. K., Strong, D. M., & Wang, R. Y. (2002). Information Quality Benchmarks: Product and Service Performance. Communications of the ACM, 45(4), 184-192.
ISO. (2008). IEC FDIS Software Engineering – Software Product Quality Requirements and Evaluation – Data Quality Model (Vol. 25012).

Richard Stöckle-Schobel.
Concept Learning Pluralism
Abstract: In his work on concept acquisition, Jerry Fodor (2008) has contributed two important observations that are worthy of discussion independently of his position on the possibility of concept learning: i) the characterisation of concept learning as a rational-causal process, in contrast to brute-causal acquisition processes, and ii) the firm insistence that the only available rational-causal process is the hypothesis-formation-and-testing model.

In this paper, I will use Fodor's characterisation of learning and first show that it is equivalent to Margolis & Laurence's (2011) three criteria for concept learning: (Change), (Function), and (Content). These criteria are the following:
Change: "learning generally involves a cognitive change as a response to causal interactions with the environment" (Margolis & Laurence 2011, p. 529)
Function: "learning often implicates a cognitive system that isn't just altered by the environment but [...] has the function to respond as it does" (Margolis & Laurence 2011, p. 529)
Content: "learning processes are ones that connect the content of an experience with the content of what is learned" (Margolis & Laurence 2011, p. 529)

Second, I will show that these criteria allow for a pluralism of concept learning mechanisms, against Fodor's second insight: applying the criteria excludes clear cases of accidental concept acquisition, but is not sufficient to exclude a range of other proposals from the cognitive psychology literature.

To support this position, I will give an example of a concept learning mechanism that satisfies the criteria without being based on hypothesis formation and testing: Margolis & Laurence's syndrome-based sustaining mechanism model. It is based on the idea that concepts are learnt by mediation of a set of beliefs about a given thing that contains beliefs essential to a concept, which nonetheless aren't constitutive of the concept. These beliefs are acquired by perceptual contact with exemplars of the concept, from which a 'syndrome' sustaining the conceptual content is constructed. A 'syndrome', in Margolis & Laurence's usage, is a set of observable properties of a given thing that enables one to single the thing out in a given group of objects.

The example of Margolis & Laurence (2011) is apt since its background assumptions are reasonably close to Fodor's own version of nativism. Thus, the less Fodor has to reject within their proposal for a concept learning mechanism, the less room is left for evading the argumentative force of such a proposal.

The primary result of this discussion is that Fodor's characterisation of concept learning allows for a plurality of concept learning mechanisms, hence for Concept Learning Pluralism: there are multiple mechanisms of concept learning that can be aligned along several relevant dimensions, such as compliance with the above criteria, target domain of learnable concepts, and ontogenetic activity. I will conclude by giving a short exposition of the methodological demands of a pluralist position regarding concept learning.

References:
Fodor, J. (2008). LOT 2: The Language of Thought Revisited. Oxford: Oxford University Press.
Margolis, E. (1998). How to acquire a concept. Mind and Language, 13(3): 347-369.
Margolis, E. and Laurence, S. (2011). Learning matters: The role of learning in concept acquisition. Mind & Language, 26(5): 507-539.

John Wigglesworth. Logical Laws
Abstract: Laws tell us how things must be. The laws of logic are supposed to be particularly strong laws, in that they couldn't possibly be broken. And the possibility in question is supposed to be the strongest kind of possibility: logical possibility. The laws of logic hold of logical necessity.

We can draw an analogy between the laws of logic and the laws of physics. The laws of physics tell us what must hold of physical necessity. According to these laws, for example, the entropy of isolated systems never decreases, and the speed of light is always constant. These things could not fail to hold.
To do so would be physically impossible. To do so would break the laws of physics. Put another way, these laws hold of physical necessity.

If possibility is understood in terms of worlds, laws carve out the space of worlds into equivalence classes. Each class contains worlds that are possible with respect to one another. Those worlds outside of a given class are impossible with respect to the worlds in that class. As there are different kinds of possibility, there are different kinds of laws. The physical laws separate the physically possible worlds from the physically impossible worlds. The logical laws separate the logically possible worlds from the logically impossible worlds.

In this paper, we address the question: what is a logical law? We often refer to logical laws: the law of excluded middle, the law of non-contradiction, the law of identity. Taking these as examples, it is natural to say that the laws of logic are characterized by particular forms of sentences. The instances of these forms must, as they are laws of logic, always be true. We show that this conception of the logical laws, as instances of formulas that are logically true, is mistaken. To do this, we argue that there is a connection between laws and possibility. More specifically, we argue that there is a connection between laws and possible worlds.

We show that if the logical laws are taken to be instances of formulas that are logically true, then these laws do not carve up the space of worlds correctly. It would allow worlds that should be considered logically impossible to be logically possible. We also show that another conception of logical law, the conception that takes laws to be single-conclusion inferences, runs into a similar problem. Therefore, we argue that the logical laws should be taken to be multiple-conclusion inferences. That is, the logical laws should be understood as inferences from one set of sentences to another. We conclude by addressing certain issues in the philosophy of logic that may arise from this conception of logical law.

Tomasz Placek. Causal probabilities in GRW quantum mechanics
Abstract: A Humean question as to whether there are irreducible modal factors (powers, dispositions, propensities) has fueled a large part of the debate concerning interpretations of probabilities. A recent tendency is to investigate how competing probability interpretations handle probability ascriptions as they occur in a particular science. A case in point is a controversy concerning probabilities in the GRW version of quantum mechanics (Ghirardi, Rimini and Weber, Phys Rev D 34: 470, 1986): Frigg and Hoefer (Stud Hist Phil Mod Phys 38: 371–389, 2007) provided a (Lewisian) Humean Best System (HBS) analysis of GRW probabilities, in which modalities are reduced to a balance of some theoretical virtues; they claim the HBS analysis is superior to a modally irreducible single-case approach to the GRW theory. In sharp contrast to this approach, Dorato and Esfeld (Stud Hist Phil Mod Phys 41: 41–49, 2010) advocate the notion that GRW probabilities are best understood as irreducibly modal single-case propensities that measure the objects' power to localize.
This stance is a part of these authors' larger project of conceiving the GRW theory as a fundamental ontology of powers.

This talk aims to adjudicate between the two positions by first clarifying the modal and spatiotemporal underpinnings of the GRW theory (read as a theory of powers) in a way that allows for the identification of propensities with weighted well-localized modalities. Second, it addresses a popular formal objection to propensities by constructing, in the spirit of Muller's (Brit Jour Phil Sci 56(3): 487–522, 2005) causal probability spaces, the appropriate Boolean algebras on which the propensities are defined.

We choose the flash ontology, as proposed by Bell (1987), with objects identified with galaxies of such flashes, which seems to be particularly handy in the relativistic version of the GRW theory – see Tumulka (Jour Stat Phys 125(4), 2006). The alternative developments of configurations of flashes are represented in a branching-time framework (original non-relativistic GRW) or in branching space-times (relativistic GRW, see Tumulka's (Jour Stat Phys 125(4), 2006) version). Both frameworks yield a natural, though minimal, concept of transitions as pairs of events (in particular, flashes), such that the first is causally before the second.

Following Muller (ibid), a Kolmogorovian probability space is associated with each transition, its base set being identified with a set of possible transitions that are alternatives to the given transition. As a result of modal and spatiotemporal constraints, the probability of a transition of arbitrary length is a function of the probabilities of the basic transitions it involves.
Importantly, the function allows for a failure of factorizability, leaving room for Bell's theorem.

The conclusion is that all three ingredients of the GRW theory understood as an ontology of powers can be rigorously represented: irreducible modality, spatiotemporal aspects, and propensities as weighted modalities. Yet there still remains a worry about the bearers of powers/dispositions. Typically this role is played by enduring objects. Could then powers be powers of flashes?

Dean Peters. Against the “working posits” version of selective realism
Abstract: Many contemporary scientific realists defend “selective realism”. This states that, if some theory is empirically successful and some element of this theory is essential for that success, then this element (probably) accurately describes a corresponding feature of the world. This view, by implying theoretical continuity in these essential elements, counters critics of realism (e.g. Laudan, 1981) who point to “radical discontinuity” in the history of science. Despite important differences, Kitcher (1993) and Psillos (1999) broadly agree that the essential propositions – i.e. the “working posits” (WPs) – of a successful theory are those that “fuel” the actual derivations of successful predictions. In this paper, I offer the first systematic examination of this view, surveying no fewer than six interpretations of it. I argue that none is satisfactory, and consequently suggest an alternative positive view.

In a critical mode, Stanford (2006) argues that the WPs are implicitly just those which in retrospect have been retained despite theoretical change (1). Such a retrospective interpretation is indeed unsatisfactory, but I claim that a prospective account is also possible. Also in a critical mode, Chang (2003) suggests that the WPs are those which are causally involved in a successful derivation (2). There are, however, obvious counterexamples to this interpretation: posits that are obviously false might guide a scientist towards a successful theory. Some logical interpretation is required.

Psillos argues that the WPs are “... those theoretical constituents which scientists themselves believed to contribute to the successes of their theories.” (3) I respond that, firstly, scientists’ judgements are fallible and, secondly, this answer is intellectually incurious – we are interested in the logical criteria by which such judgements are made, not simply the fact that they are made.
A related suggestion is that the WPs are those formally invoked in the derivation of a result (4). I respond that a distinction between formal and informal reasoning is, firstly, untenable and, secondly, epistemically irrelevant for current purposes.

Psillos also suggests that a working posit H is one which cannot be eliminated, or replaced by some other available posit, while leaving the successful derivation intact. “Available” is often taken to describe a logical consequence of H (5). But this interpretation results in absurdity, as any posit is eliminable in favour of some “downstream” proposition, including the successful empirical result itself. Vickers (forthcoming) attempts to remedy this by designating as essential those posits which are (i) involved in a formal derivation; and (ii) the logical consequence of several “upstream” posits (6). This emphasis on logical “confluence points” is appealing, as intuitively this is where the “work” happens in a derivation.

I argue, however, that this interpretation simply emphasises a major flaw in the entire approach, namely that it focuses on particular successful derivations rather than the general empirical success of a theory. If selective realism is to explain this success, it must designate as essential those posits involved in multiple successful derivations. Hence the essential elements of a theory are the logical divergence points, not confluence points, i.e. those that ‘unify’ empirical phenomena.

Ivan Gonzalez-Cabrera. Bonobos as model of the last common ancestor of humans and apes: the neglected discussion in the evolution of human cognition
Abstract: Discussions about the evolution of human cognition usually portray the last common ancestor of humans and apes as a chimpanzee-like hominid. This has long been the prevailing view in both the philosophical and biological literature.
Such a view has been strongly championed by Richard Wrangham and his colleagues, although it has also been challenged by other researchers – most notably Adrienne Zihlman and Frans de Waal. For those researchers the bonobo is, at least in some important respects, a suitable model of our last common ancestor. Chimpanzees and bonobos are both our closest living relatives, and we share with them about 98% of our genome. More recently, the sequencing of the bonobo genome has shown that 1.6% of the human genome is more closely related to the bonobo than to the chimpanzee genome, and that 1.7% of the human genome is more closely related to the chimpanzee than to the bonobo genome. This means that in principle our last common ancestor could have possessed a mosaic of traits seen in both Pan species. An alternative evolutionary scenario is given by the self-domestication hypothesis, according to which the observed differences between the two species are due to selection against aggression in the bonobo from a chimpanzee-like common ancestor. The two scenarios have typically been understood as rival and mutually exclusive hypotheses. However, in this paper I will argue for an alternative picture of hominid evolution in which self-domestication processes might have played an important role in the evolution of the human lineage by affecting a common ancestor who was, in some important respects, more bonobo-like than chimpanzee-like. If that result is correct, I will argue, many evolutionary scenarios that have been offered for the evolution of human cognition would be either correct but too general to explain the relevant cognitive mechanisms, or fairly specific in their evolutionary narrative but plainly false. For that reason, in the final part of the paper I will discuss how this view could affect ongoing debates on the evolution of human cognition – particularly, recent discussions about the evolution of what has sometimes been called ‘human moral cognition’.

Alexander Reutlinger. Three Objections to the Open Systems Argument
Abstract: In ‘On the Notion of Cause’, Bertrand Russell famously argues that it is an important lesson of fundamental physics that – contrary to the beliefs of philosophers – causation is not among the building blocks of the world.
That is, causal relations are not part of the ontology of fundamental physics. This is the orthodox Russellian claim. Recently, several Neo-Russellian philosophers have expressed agreement with Russell’s view of the ontology of fundamental physics. Agreeing with Russell on the truth of the orthodox Russellian claim, Neo-Russellians argue for – what I call – the additional Neo-Russellian claim that we have good reasons to believe in the existence of non-fundamental, higher-level causal facts (cf. Eagle 2007, Hitchcock 2007, Kutach 2007, Ladyman and Ross 2007, Loewer 2007, 2009, Ross and Spurrett 2007, and Woodward 2007). In the context of philosophy of science, the Neo-Russellian claim is primarily warranted by the observation that higher-level causes loom large in the special sciences. Usually a third widely held claim is added to the Neo-Russellian account: the dependence claim, according to which higher-level causal facts metaphysically depend on (acausal) fundamental physical facts. That is, a Neo-Russellian believes that the conjunction of the orthodox Russellian claim, the Neo-Russellian claim and the dependence claim is true.

The main puzzle that Neo-Russellians wish to solve is how one can explain that the orthodox Russellian claim, the Neo-Russellian claim and the dependence claim are all true in the actual world. I will refer to this request for an explanation as the Neo-Russellian challenge. If such an explanation of why these three claims are true can be provided, then higher-level causal facts are physically kosher facts and the Neo-Russellian challenge is met.

The central question of this talk is whether proponents of an interventionist theory of causation can meet the Neo-Russellian challenge – a goal some interventionists explicitly wish to achieve (Eagle 2007, Hitchcock 2007, Woodward 2007). The main argument, to which interventionist Neo-Russellians refer for this purpose, is the ‘Open Systems Argument’.
The basic idea of the open systems argument draws on the interventionist theory of causation as follows:
(1) causal relations are not part of the ontology of fundamental physics because it is impossible to intervene on the systems described by fundamental physics (that is, the interventionist theory does not apply);
(2) higher-level causal facts obtain since it is possible to intervene on those systems that are described by the special sciences.

According to proponents of the open systems argument, the possibility of intervening on a system depends on whether the system is open or closed. A system is open if it has an ‘environment’; otherwise it is closed. In my talk, it will be argued that the open systems argument is not sound. I will present three objections to the open systems argument in order to establish this claim.

Ken Wharton. Lagrangian-Only Quantum Theory
Abstract: This talk will summarize the conceptual framework behind a newly proposed spacetime-realist account of quantum phenomena. The starting point is the Feynman path integral (FPI), a useful probability-generating tool that unfortunately allows no realistic interpretation. Altering the FPI by restricting the "sum over histories" in a simple manner (constraining the Lagrangian density to be zero) allows one to assign equal a priori probabilities to each possible history. The expense is that there is no longer a Hamiltonian description of the dynamics (or indeed any set of dynamical equations necessarily obeyed by the system). But for spin measurements, at least, the results seem to explain known quantum phenomena: one particular history is ontic, while epistemic states naturally live in a configuration space over possible histories.

Advantages of this framework include restoring spacetime to the ontology, utilizing classical probability theory, and allowing a principle-based derivation of the Born rule. Notably, since the Lagrangian is subject to future boundary constraints, exploiting the "retrocausal loophole" permits a continuous, spacetime-based, hidden-variable description of a Bell-inequality-violating system.

Rachael Brown.
Learning and the Evolution of Evolvability
Abstract: During the development of vertebrate embryos, large numbers of axons (significantly more than are ultimately required) grow out of the central nervous system towards the extremities. The actual path these many nerves take as they grow is random. They wind their way down the limbs and into the digits in a meandering fashion. Where a nerve by chance hits muscle or organ tissue, a stabilising protein is produced that encourages it to persist. The majority of the nerves generated are less fortunate, however, and exist only fleetingly. While they too grow forth into the limbs, they fail to happen upon muscle or organ tissue. In the consequent absence of the stabilising protein that would be generated if they collided with muscle or organ tissue, they shrink back into the nervous system rather than being maintained.

This two-step process of “variation” followed by “selection” in limb neural development is a source of great power; it allows the system to explore or search the local space of phenotypic possibilities and stabilise on the most suitable given the internal environment. Limb neural development is not alone in exhibiting this type of behaviour; other morphological systems, such as the growth of mitotic spindles, display a similar pattern of “variation” followed by “selection”. These patterns of development are known within evolutionary developmental biology as “exploratory behaviours”.

The presence of such “exploratory behaviours” in the individuals making up populations is a recognised source of morphological evolvability. Accounting for evolvability is a key focus of research in evolutionary developmental biology (or evo-devo). In this paper I contribute to this project by considering how exploratory behaviours evolve.

I first draw some analogies between trial-and-error learning and exploratory behaviour in order to motivate thinking about exploratory behaviour as a learning system. In both exploratory behaviour and trial-and-error learning, the system modifies itself in response to feedback from the internal or external environment using a process of “variation” followed by “selection”. These processes not only increase the viability of individuals by making aspects of their phenotype more suited to their internal or external environments, but also make it easier for the populations that these individuals are members of to persist over time. This in and of itself increases the evolvability of the populations (extinct populations cannot evolve); it also increases evolvability by allowing the accumulation of hidden variation.

I then use the existing literature on the evolution of learning to consider the enabling conditions for the evolution of exploratory behaviour. In particular, we know that for trial-and-error learning to be beneficial, the environment must have a certain cost-benefit structure: the costs of error in trial-and-error learning cannot be so great that they outweigh the benefits offered by behavioural flexibility. I argue that similar enabling conditions exist for exploratory behaviour. More specifically, exploratory behaviour requires a certain type of epistemic structure in the internal environment; the costs of exploration cannot outweigh the benefits offered by plasticity.

Melinda Fagan. Stem Cell Pluralism: Resolving the ‘Stemness’ Debate
Abstract: A number of scientists and philosophers have argued that the concept of ‘stem cell’ should be replaced by another notion: ‘stemness’ (Lander 2009, Laplane forthcoming, Leychkis et al 2009, Robert 2004, Wolkenhauer et al 2011, Zipori 2005). These proposals, and arguments for and against, comprise the “state vs. entity debate” in stem cell research. On the ‘entity’ view, stem cells are a kind of cell that can be isolated and characterized for therapeutic use.
The ‘state’ view, in contrast, holds that stemness is a relational property of interacting cells and (on some interpretations) their environment. In this paper, I argue that the state vs. entity debate rests on a misunderstanding of the stem cell concept. Properly understood, the prevailing stem cell concept is compatible with both the state view and a modified version of the entity view.

I first contrast the two alternatives. The entity view consists of three distinct theses: (i) all and only those cells capable of self-renewal (producing more cells like the parent) and differentiation (acquiring specialized traits) are stem cells; (ii) cell behaviors and capacities are explained by molecular properties of a cell; and (iii) cell development is orderly and irreversible. On the entity view, a stem cell is a cellular entity with stable, intrinsic properties that distinguish it from other kinds of cell (e.g., neurons, white blood cells, etc.). The state view, in contrast, begins with the concept of ‘cell state’: a functional role that any cell may in principle occupy. The stem state (‘stemness’) is defined as a state from which many other cell states can be reached via differentiation. Self-renewal is not required; there is no molecular ‘stem cell signature’; and development is in principle reversible.

After setting out the two alternatives, I consider arguments for and against each. Though the state view has some evidential support, aspects of the entity view are deeply entrenched in stem cell biology (Fortunel et al 2003, Takahashi and Yamanaka 2006). I argue that the debate rests on a flawed understanding of the stem cell concept. I have previously argued for a ‘minimal’ model of stem cells, based on the prevailing definition (Fagan 2013).
This model structurally defines a stem cell as the unique origin (or stem) of the lineage L defined by time interval n, characters C and mature cell characters M. For the model to apply to biological phenomena, these parameters must be specified. Different values of these variables (specified by experimental methods) correspond to different entities that can be identified as ‘stem cells’. The stem cell concept is thus relational (implicating a cell lineage) and relative (to experimental methods). This pluralist account resolves the state vs. entity debate while retaining the prevailing definition of ‘stem cell’. I conclude with some additional considerations in favor of ‘stem cell pluralism’, and discuss some broader implications of this result.

Oliver Pooley. Against a gravity–inertia split
Abstract: Whilst textbook orthodoxy maintains that gravity is reduced to spacetime curvature, a number of authors (e.g., Stachel, Janssen) defend the claim that, in general relativity, the connection represents a unified "inertio-gravitational" field, and defend the propriety of talk of a coordinate-dependent gravitational field.

To make sense of such talk, one needs to relate the theory to Newtonian gravity, and to recognise that two routes to privileged frames of reference need not yield the same sets of frames. On the first route, which paths in spacetime correspond to unaccelerated ("inertial") motions is an absolute, coordinate-independent matter. The privileged frames are those whose standard of rest corresponds to inertial motion. On the second route, privileged frames are identified via classes of co-moving coordinate systems with respect to which the dynamical equations take a simple, canonical form.

In Newtonian gravity, the second route yields globally defined frames with respect to which freely falling bodies are (in general) accelerating. In practice, however, the theory cannot distinguish between frames that are relatively translationally accelerated. At best, therefore, an empirically undetectable proper subset of these frames encodes inertial motion. The idea of a frame-dependent inertia–gravity split arises when one combines the idea that these frames encode inertia (and thus that free-fall motions involve gravitational deflection from inertial motion) with the idea that they are fundamentally physically equivalent. This combination, however, is not coherent. A preferable viewpoint reconciles an absolute notion of inertia with the physical equivalence of the frames identified via the second route by denying that they encode inertial motion.
They are, instead, frames with respect to which the components of the connection take a particularly simple form, even though they do not all vanish.

This paper constitutes joint work with Dennis Lehmkuhl, who has submitted a related abstract on another part of our project.

Bert Leuridan, Erik Weber and Inge De Bal. Interventionism, Policy and Varieties of Evidence for Causal Claims
Abstract: In his book Making Things Happen, Jim Woodward offers an interventionist theory of causation. Simplifying matters: according to his account, causal relations are relations which are invariant under (some range of) interventions.

One way to gather evidence for or against a causal claim is to perform a randomised experiment. In many scientific disciplines – such as econometrics, the social sciences and epidemiology – other, non-experimental methods are used as well. Several authors have criticized Woodward’s account of causation. Some of them claim or suggest that he cannot account for the use of non-experimental evidence for causal claims (Federica Russo, Julian Reiss). Others say that he ties his theory of causality too closely to experimentation (Nancy Cartwright). If these critics are correct, Woodward faces an enormous problem.

In this paper, I argue that the interventionist theory of causation can account for the use of non-experimental evidence, but that a sharp distinction must be made between two possible contexts in which Woodwardian interventions (or manipulations more generally) can occur: scientific experiments on the one hand and policy actions on the other hand. More specifically, I will distinguish between ideal/non-ideal and real/hypothetical experiments and policy actions.

The meaning of causal claims is linked both to the outcomes of scientific experiments (real or hypothetical) and to the outcomes of policy actions (again real or hypothetical). But only the latter link constitutes their relevant content. Starting from these ideas, I will argue:
- that some of Woodward’s formulations of the interventionist account misleadingly focus on the non-relevant content of causal claims (thus showing that his critics do have a point);
- that, how, and to what extent Woodward’s interventionist theory of causation can account for the use of purely observational evidence for causal claims; and
- that a similar story can (and needs to) be told about the use of experimental evidence (from a model population or from the target population) for causal claims (a point that has not yet been treated in sufficient detail by either Woodward or his critics).

In short, Woodward’s interventionist theory of causation can account for varieties of evidence for causal claims – pace Russo, Reiss and Cartwright.

Note: This is joint work.

References:
Cartwright, N. (2007), Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge: Cambridge University Press.
Russo, F. (2011), “Correlational Data, Causal Hypotheses, and Validity”, Journal for General Philosophy of Science 42, 85–107.
Reiss, J. (2012), “Causation in the sciences: An inferentialist account”, Studies in History and Philosophy of Biological and Biomedical Sciences 43(4), 769–777.
Woodward, J. (2003), Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.

Tobias Henschen.
A Subjectivist Variant of a Structural Account of Macroeconomic Causation
Abstract: If we look for a definition of the notion of causation in macroeconomics, then the so-called interventionist account of causation appears to be the account to turn to. The interventionist account not only appears convincing as a general account of causation: it nicely captures an intuition that we have when thinking of X as a cause of Y (the intuition that intervening on X will lead to a change in the value of Y), and it scores well with respect to problems that are well known in the literature on causation (problems relating to phenomena like conjunctive forks, preemption, overdetermination and trumping). Proponents of the interventionist account also claim to remain true to the spirit of the early econometricians when stating that interventions turn mere regression equations into structural equations, and that structural equations can always be given a causal interpretation. Important objections that have been raised to the interventionist account include charges of anthropomorphism and circularity. But proponents of this account have rejected these charges by pointing out that the notion of intervention is broad enough to include non-human and ideal interventions, and that the interventionist account is meant to provide a non-reductive analysis of causation.

The aim of the paper is threefold. Its first aim is to reject the circularity charge without giving up a reductive analysis. The non-circular definition that it presents dispenses with the notion of intervention and relies on the notion of a structural model instead. Its second aim is to raise an objection to the objectivist attitude that becomes most explicit in interventionist talk of stable and autonomous mechanisms. This objection states that empirical evidence of the existence of these mechanisms is unavailable to macroeconomists.
And it relies on the observation that the notion of exogeneity that is needed to characterize structural equations is the traditional error-based notion of exogeneity, and that empirical evidence for error-based exogeneity is unavailable to econometricians (there is no way to test whether an (instrumental) variable is uncorrelated with an error).

The third aim of the paper is to suggest a subjectivist variant of a structural account of macroeconomic causation: a variant involving a (“Humean projection”-like) two-step procedure of first relativizing the variables and parameters that figure in a structural model to the beliefs of an epistemic subject, and of then objectifying these beliefs as far as possible. The result of the first step is a definition that is syntactically and semantically identical with the non-circular, structural-model-based definition, except that the parameters that figure in this definition denote degrees of subjective belief about the strength of associations. The result of the second step is that the subjective beliefs about this strength lose important components when being objectified. Objectifications of these beliefs amount to little more than the time series that provide values to the variables figuring in structural models. And these time series reflect neither exogeneity nor any of the other properties needed to characterize structural equation models.

Michela Massimi. Perspectival Realism
Abstract: In this paper, I take the cue from Ron Giere’s (2006, 2009) recent defense of scientific perspectivism and Paul Teller’s recent work along similar lines (2011, 2013) as a middle ground between relativism and objectivist realism. Giere’s perspectivism starts from the secondary quality of color vision and its analogy with scientific observation. The final result is a thoroughgoing perspectivalist view about models and theories, whereby truth claims become relative to a perspective. I defend a different variant of perspectival realism as a genuine middle ground between traditional realism and relativism. The paper has two main goals.
First, I show the inadequacy of scientific perspectivism if understood as a form of truth-relativism, by highlighting some undesirable features that ensue from making truth claims relative to scientific perspectives. Second, I defend an alternative view that takes as perspectival the justification for our knowledge claims, but not the truth-conditions for those same claims. I contend that, understood in this way, perspectivalism can do justice to the historically contingent and interest-relative nature of our scientific claims no less than to some key realist intuitions about how science tracks nature.

Robert Northcott. Opinion polling: a philosophical analysis
Abstract: I argue for two theses: 1) that opinion polling is (sometimes) an example of unusually successful social science; and 2) that the details of this success shed light on issues in the recent literature on scientific modeling.

1) Opinion polling seeks not just to measure a population’s preferences but also to predict that population’s future behavior. As is well known, some aggregators of many different opinion polls have had great success at one example of this, namely predicting election results. Famously, several correctly predicted the result of the 2012 US presidential election in all 50 states, as well as the overall vote share to within a few tenths of a per cent.

2) Such quantitative predictive success is rare in social science. What philosophical lessons does it hold? There are many alternative methods for predicting election results. These range from the formal, such as regressions of various socioeconomic variables against past election data, to the informal, such as insider campaign reportage. But none of these other methods had anything like the success of the best opinion poll aggregators. (Plus, much of the contemporary media coverage also proved inaccurate.) Further, some aggregators did markedly better than others. This suggests that success here was non-trivial and deserves closer study.

The details here revolved around issues such as: the adjustment of raw survey data in light of demographic assumptions; the correlation of errors between different polls and between different states; how to resolve discordance between state polls and national polls; the different weights to assign different polls when aggregating; and, more generally, the extent of any value that can be added by sophisticated procedures that go beyond crude mere averaging.

An important general theme in the handling of all these issues was the role of theory – or rather the relative lack of it. Each issue required much trial-and-error investigation against carefully chosen data. Generalized theoretical knowledge on its own proved wholly insufficient. This is supported by the relative failure of some of these same poll aggregators in other elections, such as the 2010 UK general election or US midterm elections. The difficulty of extrapolating the same predictive methods to new contexts underscores the highly local nature of the relevant expertise.

Overall, the picture that emerges emphasizes the role of informal, context-specific ‘rule of thumb’ knowledge in successful scientific modeling. This dovetails with an influential recent strand in the literature (e.g. Cartwright 2006, 2010, Alexandrova 2008), and indeed provides a new case study that both supports and refines it.

Dennis Lehmkuhl. Einstein on the nature of the gravity/inertia split
Abstract: I argue that, contrary to folklore, Einstein never really cared for geometrizing the gravitational or (subsequently) the electromagnetic field; indeed, he thought that the very statement of ‘geometrization’ was “meaningless”.
I will show that this was Einstein’s opinion even though he was in full command of the mathematical and conceptual tools that would have enabled him to adopt the modern viewpoint that, because test particles under the influence of gravity move on geodesics, gravity should be seen as reduced to inertia/geometry. Indeed, I will show that at the latest by 1916, a debate with Friedrich Kottler enabled Einstein to see gravity as reduced to inertia/geometry in exactly the way advocated by modern textbooks, 2 years before the affine connection was given its modern geometrical definition by Levi-Civita and Weyl. However, Einstein consciously decided not to take this stance, neither before nor after the developments of Levi-Civita and Weyl. Instead, he interpreted the fact that according to General Relativity particles move on geodesics even when subject to gravity as showing a) that gravity and inertia are unified in a way directly analogous to the unification of electric and magnetic fields in special relativity; b) that attributing the presence of a gravitational field (rather than a unified gravitational-inertial field) is possible only relative to a chosen coordinate system. Thus, Einstein thought of the `split’ between gravity and inertia as something in principle “unnecessary”. However, he also insisted that labeling the terms in the coordinate-dependent decomposition of the geodesic equation as “inertial” and “gravitational”, respectively, was useful when comparing General Relativity to its predecessor theories. After reconstructing Einstein’s stance with respect to the gravity/inertia split (or the absence thereof), I will comment on whether the idea that the split is primarily a tool for comparing General Relativity to other spacetime theories, rather than a feature inherent to GR, might be useful for a modern interpretation of GR and its comparison with other spacetime theories. This paper constitutes part of a joint project with Oliver Pooley, who has submitted a related abstract on another part of our project.

Alexandre Marcellesi. How far can invariance take us on the way to a causal explanation? Not very far.
Abstract: According to Woodward’s interventionist account, it is invariance, not lawfulness, that is the mark of explanatory generalizations. Interventionism thus has a major advantage over e.g. the Deductive-Nomological account since it does not yield the paradoxical conclusion that there are no genuine explanations in the sciences that lack genuine laws (ecology, sociology, etc.). I here argue that invariance is not sufficient to mark off explanatory generalizations from the rest, and so that Woodward has /at best/ identified a necessary condition for causal explanation.
On Woodward’s view, a generalization M: Y = f(X) causally explains an event of the form Y = y if and only if (1) it is the case that y = f(x), where x and y represent the actual values of X and Y, respectively, and (2) M would be invariant, i.e. remain true, under at least one possible testing intervention on the value of X. A testing intervention on X with respect to Y is a manipulation of the value of X which has an effect on Y, if at all, only via its effect on X (this makes this manipulation of X an intervention) and which sets X to some non-actual value and is followed by Y also taking some non-actual value (this makes this manipulation of X a /testing/ intervention).
Consider two continuous variables X and Y and a generalization M: Y = f(X) representing the relationship between them. Assume (i) that the actual values of X and Y are x and y, (ii) that y = f(x), (iii) that when intervention I sets X = x’, then Y = y’, and (iv) that y’ = f(x’). It follows from these four assumptions that M satisfies conditions (1)-(2) and so causally explains event Y = y. Now, a generalization N: Y = g(X) that is such that g(x) = f(x) and g(x’) = f(x’) also satisfies these conditions, and so causally explains Y = y just as much as M does.
The problem is that there are infinitely many generalizations which, like N, coincide with M in points (x, y) and (x’, y’). Because M is univariate, you can think of it as a curve in a plane. It then becomes obvious that infinitely many curves may intersect M in points (x, y) and (x’, y’).
Interventionism thus implies that every event is causally explained by infinitely many generalizations, which conflicts with the fact that scientists do not consider infinitely many generalizations when building causal explanations. This discrepancy suggests that interventionism fails to account for /at least/ one important feature of generalizations that scientists do take into account when deciding which generalizations are causally explanatory and which are not.
In the remainder of the paper I rebut three objections claiming respectively (a) that these infinitely many generalizations are not, from the point of view of causal explanation, genuinely distinct, (b) that one can appeal to the interventionist account of explanatory depth to distinguish among those infinitely many generalizations, and (c) that appeals to virtues like simplicity can help interventionists distinguish among these infinitely many generalizations.

Arianne Shahvisi. Particles do not conspire
Abstract: It has been suggested (e.g. Fodor, 2008) that granting lawhood status to special-science generalisations (SSGs) implies elaborate conspiracies amongst fundamental particles, whose behaviour at the micro-level is mysteriously coordinated to make SSGs projectible: the 'microscopic conspiracy' problem (MC). This paper will critically assess two theories of special-science laws: (a) Albert (2000) and Loewer's (e.g. 2008) (AL) theory, which has SSGs following as probabilistic entailments from the fundamental laws coupled with the Past Hypothesis and the Statistical Postulate, and (b) Callender and Cohen's (2010) 'Better Best System' (BBS) theory, which relativises the Mill-Ramsey-Lewis simplicity-strength metric so that SSGs compete within their own explanatory domains. I will defend the AL theory against Callender and Cohen's criticisms, but will ultimately find that the optimal non-conspiratorial theory of lawhood is a version of BBS that considers the way in which the origins of macroscopic subsystems restrict their later behaviour. I call this the 'Subsystem Genealogy' amendment, and propose that it closes vital explanatory lacunae in the otherwise powerful BBS theory.
Albert, D. Z. (2000), Time and Chance, Harvard University Press.
Callender, C. & Cohen, J. (2010), 'Special Sciences, Conspiracy and the Better Best System Account of Laws', Erkenntnis, 73: 427-447.
Fodor, J. (1998), 'Special Sciences: Still Autonomous After All These Years', Philosophical Perspectives, 11: 149-163.
Loewer, B. (2008), 'Why There Is Anything Except Physics', in Being Reduced: New Essays on Reduction, Explanation, and Causation (eds H. Jakob & K. Jesper), New York, NY: Oxford University Press.

Ioannis Votsis. The Scientific Method
Abstract: In this talk, I argue, contrary to popular belief, that there is such a thing as the scientific method and that we already possess some of its principles, or at least approximate versions of them. The popularity of the opposite view can be traced back to the fact that most attempts to identify the scientific method involve an overly strong conception and are therefore bound to fail. I propose a weaker conception, one that maintains that there is a core methodology shared across all domains of inquiry while at the same time allowing for variation on the periphery.
Several attempts have been made over the years to uncover the one true scientific method. They include inductivism, hypothetico-deductivism, falsificationism, (objective and subjective) Bayesianism, abductivism, etc. Each of these views has been subjected to criticism. Among the main objections has been the claim that a candidate scientific method does not, and even cannot, do justice to what goes on in the context of discovery and/or the context of justification.
The end result of all the different objections has been the emergence of a widespread pessimism over the existence of such a thing called 'the scientific method'. This pessimism is perhaps best reflected in two (otherwise very different) works, namely Feyerabend (1975) and Laudan (1984).
Worrall (1988) already provides some hope for optimism. He argues against Laudan that in order to coherently explain progress in science one must assume that some methodological principles remain fixed despite the occurrence of scientific revolutions. I provide a generalisation of Worrall's arguments to show that convergence of methodological principles does not only take place within a specific domain of inquiry but also across domains. Given the varied nature of the domains involved, not every methodological principle utilised in a specific domain will be found in all the other domains. Only some of them will have this cross-domain convergent character. A handful of candidates are discussed in the talk. One such example is the principle of reproducibility: other things being equal, hypothesis generation and testing should be such that sufficiently similar hypotheses and sufficiently similar test results should be reproducible under sufficiently similar conditions.
Cross-domain convergence is not sufficient to establish that the said principles form part of the one true scientific method. It is also important to argue, as Laudan (1989) insists but Worrall (1989) resists, that any methodological principles we converge upon are justified. The spectre of regress looms if we expect them to be justified by further principles. To avoid it, I argue that the aims and goals of scientific inquiry are such that only certain methodological principles can help bring them about, namely the convergent ones.
References:
Feyerabend, P.K. (1975) Against Method, London: Verso.
Laudan, L. (1984) Science and Values, Berkeley: University of California Press.
---------- (1989) 'If It Ain't Broke, Don't Fix It', BJPS, vol. 40(3): 369-375.
Worrall, J. (1988) 'The Value of a Fixed Methodology', BJPS, vol. 39(2): 263-275.
---------- (1989) 'Fix It and Be Damned', BJPS, vol. 40(3): 376-388.

Flavia Padovani. A Logical Space for Measurement
Abstract: Scientific theories typically consist of principles, laws, and equations, which include specific, already sufficiently interpreted theoretical terms, such as velocity, pressure, time, temperature, etc. In most cases in the history of science, the introduction of these parameters is parallel to the development of the theory in which they occur. These are not, in fact, pre-existing quantities, and their individuation as parameters often goes together with the creation of the corresponding measurement procedures. Besides, measurement procedures are grounded in, and depend on, a pre-constituted conceptual framework, which they help to forge.
In line with recent literature on scientific representation, we can consider measuring as “representing”, since a measurement pinpoints the target in an already-constructed theoretical space. To put it with Bas van Fraassen, and use the Wittgensteinian concept of “logical space”, we can say that “the act of measurement is an act—performed in accordance with certain operational rules—of locating an item in a logical space”, that is, “an ordered space of possible measurement outcomes” (Scientific Representation (2008), p. 164). In other words, a logical space is a mathematical construct required in order to represent certain conceptual interconnections, and which provides the range of possible features pertaining to the items described in the domain and in the language of that theory. The objects of representation are located in this space of pre-ordered possibilities.
So, there is a sense in which the activity of measuring means framing, i.e., “constituting” the measured quantities, thus allowing for the coordination of abstract, mathematical quantities to “pieces of reality”.
Originally reinterpreting Ernst Cassirer’s proposal, Hans Reichenbach was among the first to seriously tackle the issue of coordination. In his early works, he put forward an account of coordinating (i.e., constitutive) principles of science in which the issue of measurement was pivotal. In that view, constitutive principles are revisable, theory-relative preconditions of knowledge, supplying the mathematical machinery of a theory with empirical interpretation. In the past two decades, one of the leading interpretations of such principles has been proposed by Michael Friedman, along the line traced by Reichenbach. However, in Friedman’s account the constitutive function of measurement is completely neglected whereas, I shall argue, this function is crucial. Measuring has to be considered as constitutive of the representation of physical (measured) objects, and it is actually the first, fundamental level at which an act of “constitution” takes place in scientific representation.
The aim of this paper is to suggest an interpretation of the practice of measuring that combines the concept of “logical space” with Friedman’s “relativised a priori”. This allows for a more “liberalised”, dynamic and pragmatic account of the principles operating in scientific practice -- a dimension which does not get much attention in Friedman’s work.

Marion Godman, Mikko Salmela and Michiru Nagatsu. Three Roles for Social Motivation in Joint Action
Abstract: The standard account of joint action and human cooperation has it that such action is principally facilitated by relatively cognitively demanding shared intentions and common knowledge (e.g. Bratman; Searle; Gilbert; Tomasello).
In recent years another hypothesis has attracted an increasing number of economists, developmental psychologists and philosophers: that many of our social interactions may not be principally driven by – or at least not merely by – agents’ intentions to achieve joint goals, but by social motivations to engage in joint action. In other words, humans find acting with others pleasurable or rewarding in its own right (e.g. Sugden; Chartrand; Nielsen; Chevalier et al.). After considering some of the experimental and empirical work that seems to support this hypothesis, we ask: if humans often engage in social activities simply for their own sake, without the guidance of an intention toward a shared goal, what is the phylogenetic and ontogenetic role for this social motivation in joint action? We suggest that there are three compelling answers to this question, which support a gene-culture co-evolution of social motivations and emotions: 1) Helping to facilitate the coordination and execution of things that might only be done by groups, or are more efficiently performed by groups (i.e. beneficial to both groups and individuals); 2) Enhancing group formation and maintaining a sense of group cohesion and conformity that enables the faithful cultural transmission of other traits (i.e. mainly beneficial to groups); 3) Assisting the formation and maintenance of social bonds with kin and outside kin relations (i.e. mainly beneficial to individual fitness). While (1) can be seen as complementing the role of shared intentions in joint action, (2) and (3) rather suggest that shared intentions are explanatorily redundant in many social interactions, such as those that are of a more explorative character.

J. Brian Pitts. How Space-time Theory Is Illuminated by Particle Physics: The Neglected Case of Massive Scalar Gravity
Abstract: Both 1920s-30s particle physics and the 1890s Seeliger-Neumann modification of Newtonian gravity suggest an algebraic “mass term” for gravity. It gives the graviton a mass and hence a finite range. The smooth massless limit indicates permanent underdetermination.
In 1914 Nordström generalized Newtonian gravity to fit Special Relativity.
Why not do to Nordström what Seeliger-Neumann did to Newton? Einstein started to do so in 1917---to set up a (faulty!) analogy for his cosmological constant Λ. Free relativistic massive scalar gravity satisfies the Klein-Gordon equation. Scalar gravity has not been empirically viable since the 1919 bending of light, but provides a useful test bed. Wigner classified relativistic fields using spin (tensor rank) and mass; any spin can be massless or massive---e.g., electromagnetism (Proca’s massive spin 1). Massive scalar gravity was completed postmaturely, not before around 1970. Massive spin 2 gravity would have blocked Schlick’s refutation of Kant’s synthetic a priori.
Massive scalar gravity illuminates most issues in space-time theory. A mass term shrinks the symmetry group from the 15-parameter conformal group to that of Special Relativity. Gravity did not have to burst Special Relativity. Massive scalar gravity violates Einstein's principles (general covariance, general relativity, equivalence and Mach) in empirically small but conceptually large ways. Geometry is a poor guide: matter sees a conformally flat metric because gravity distorts volumes while leaving the speed of light alone, but gravity sees the whole flat metric due to the mass term. The same geometric ingredients yield uncountably many theories differing in nonlinear gravitational self-interaction. The dynamics explains the geometry, in accord with Brown’s dynamical approach to space-time geometry, but not vice versa. The space-time realism of Norton et al. either fails to discern whether a Poincaré-invariant field theory is special relativistic or contradicts the empirical content of the matter field equations. With Poincaré (pace Eddington's geometric empiricism), one can discuss a “true” flat geometry (observable, barely, via the gravitational dynamics) not seen by material rods and clocks. But questions about “true” geometry need no answer and block inquiry.
The critique of conventionalism via the Ehlers-Pirani-Schild construction, by neglecting the gravitational field equation, fails to notice the two physically relevant notions of geodesic.
Both technically and conceptually, spin 0 (scalar) gravity is a good warm-up exercise for spin 2. In 1939 Pauli and Fierz noticed that Einstein’s theory was (neglecting interactions) massless spin 2; why not try massive spin 2? In 1970-72, an apparently fatal dilemma involving either instability or empirical falsification (van Dam-Veltman-Zakharov discontinuity) arose for massive spin 2 gravity, finally making GR uniquely plausible. But dark energy measurements since 1999 cast doubt on GR at long distances. Recent calculations (some from 2010) show that instability can be avoided for massive spin 2 and that empirical falsification likely can be also. Thus massive spin 2 vs. GR is a serious case of underdetermination. Particle physics allows one to proportion belief to evidence, sensitive over time to new results, rather than suffering from unconceived alternatives.

Christian J. Feldbacher. Diversity, Meta-Induction, and the Wisdom of the Crowd
Abstract: It can be shown that some meta-inductive methods are optimal compared to competing methods inasmuch as they are in the long run the most successful methods in a prediction setting (cf. especially Schurz 2008). Meta-inductive methods build their predictions on competing methods, depending on their past success. Since they depend on other methods, they normally decrease the diversity or independence within a setting. However, some very important results of social epistemology show that diversity in a setting is highly relevant for the overall performance within the setting: the so-called "influence of diversity on the wisdom of a crowd", where one may observe that a group's averaged estimation of an outcome is more accurate than the average individual estimation due to diversity within the group.
So, at first glance it seems that meta-inductive methods are valuable for their own sake, but not for the sake of a whole group of methods' performance. For this reason Paul Thorn and Gerhard Schurz recently investigated the influence of meta-inductive methods on the performance of a group in more detail.
Since there are no general results about this influence in a broad setting, they performed simulations for quite specific settings. The main result of their argumentation and simulations is that "it is not generally recommendable to replace independent strategies by meta-inductive ones, but only to enrich them" (cf. Thorn & Schurz 2012).
In this paper a complementary summary of the mentioned investigation of meta-induction and the wisdom-of-the-crowd effect is provided. In particular, it is shown that, whereas meta-inductive methods allow one to account for the traditional problem of induction by making a step to a meta level, investigations of social epistemology, which make a similar step to a meta level by using a wisdom-of-the-crowd effect, are able to account similarly for object-level problems such as, e.g., the problem of how to deal with peer disagreement. In situations where both problems and solutions come together, the new problem of how meta-inductive methods influence the group's performance arises. With the help of simulations in a setting where diversity is highly influential, we will take a complementary view of this problem. Among the simulations is also a case of Paul Feyerabend's diversity argument, claiming that progress in science is sometimes possible only via diversity in or plurality of theories and methods (cf. Feyerabend 1993, p. 21, p. 107). More general simulations concerning the importance of diversity in order to justify some kinds of positive discrimination, or diversity in interdisciplinary research at the cost of average competence, will also be modelled in the meta-inductivist framework and investigated in detail.
- Paul Feyerabend. Against Method. 3rd ed. London: Verso, 1993.
- Gerhard Schurz. "The Meta-Inductivist's Winning Strategy in the Prediction Game: A New Approach to Hume’s Problem". In: Philosophy of Science 75.3 (2008), pp. 278–305.
- Paul Thorn and Gerhard Schurz. "Meta-Induction and the Wisdom of Crowds". In: Analyse und Kritik 35.2 (to appear).

Conor Mayo-Wilson. Games Against Nature, The Division of Cognitive Labor, and Voting on Theories

David Crawford. Probability Measures and Biological Fitness
Abstract: I reassess the probabilistic foundations of the received probabilistic interpretation of biological fitness (PIF) and argue that a misguided conceptual framework is responsible for numerous problems faced by, and objections posed to, this interpretation. Since its inception, the PIF has involved probabilistic measures of fitness that both ignore the role of pairwise comparisons and fail to adequately distinguish between individual, subpopulation, and per-capita (normalized) random variables. Failure to accommodate the former factor has led the PIF to confuse the questions of "Which organism will likely have the higher average output?" and "Which organism will likely have the higher output on average?" Vagueness regarding the latter factor has prevented the PIF from capturing the importance of random variable features and the role of population size in natural selection processes. I present a basic framework for the PIF that takes these factors into account and in doing so strengthens the probabilistic content of the PIF. This reassessment also helps debunk a popular criticism of the PIF - that it implies that fitnesses are holistic or non-causal. A fresh look at the probability theory underlying the PIF provides a much-needed overhaul of the fitness debate and realigns our intuitions and basic assumptions, opening the way for a more fruitful analysis and expansion of a probabilistic view of biological fitness.

Carlo Martini and Ulrike Hahn. Two kinds of overconfidence in scientific judgment
Abstract: Since the work of Kahneman and Tversky on psychological biases, philosophers and social scientists have investigated at length the phenomenon of biases in scientific judgment and decision making.
The works of Faust (The Limits of Scientific Reasoning, 1984), Bishop and Trout (Epistemology and the Psychology of Human Judgment, 2005; The Empathy Gap, 2009), and several others have highlighted how scientific judgment is no less subject to biases and pitfalls than that of laymen. A specific charge that has been brought is that of overconfidence (e.g. Angner, "Economists as experts: Overconfidence in theory and practice," 2006).
Overconfidence can generally be defined as a faulty judgment in violation of simple statistical facts: for instance, typically more than 50% of students think their grade will be above the mean in any given class; or else, a typical scholar thinks that his or her chance of seeing a paper rejected from a certain journal is a value above the mean acceptance rate for that journal; etc. It turns out, however, that the phenomenon of overconfidence is more complex than we normally think.
In his essay "On Liberty", J.S. Mill claimed that "while every one well knows himself to be fallible, few think it necessary to take any precautions against their own fallibility, or admit the supposition that any opinion of which they feel very certain, may be one of the examples of the error to which they acknowledge themselves to be liable." In the passage just quoted Mill is implying that there are two types of judgments, about which people typically have different degrees of confidence (or sometimes overconfidence): judgments about the probability of a series of events, and judgments about single events.
It is remarkable that Gigerenzer and his collaborators have observed the very same phenomenon, and tested the occurrence of overconfidence in the two types of judgments, which they call, respectively, "frequency judgments" and "confidence reports" (Gigerenzer et al., "Probabilistic Mental Models: A Brunswikian Theory of Confidence," 1991). In this paper we test the phenomenon with experiments on different cohorts of about 100 experimental subjects.
We develop on the work of Gigerenzer and his collaborators, and we observe that the phenomenon of overconfidence is present even under a number of variations of the experimental setting initially used. Nonetheless, its magnitude varies greatly when the probability scale is inverted from the elicitation of confidence reports to the elicitation of "counter-confidence" reports (1 - confidence). The phenomenon is not easily explainable in Gigerenzer's framework, and alternative explanations are needed.
Studying overconfidence in science is important in order to assess whether the phenomenon is a truly psychological one, or a byproduct of experimental settings, which is one of the fundamental open questions in the "Heuristics and Biases Program" initiated by Kahneman and Tversky. Following the experiments, we try to formulate a number of conclusions that are relevant for the philosophical literature mentioned above, in particular the works by Faust, Bishop, and Trout. We also formulate a number of possible strategies for eliminating, or at least reducing, overconfidence in scientific expert reports.

Katharina Kraus. Kant and the challenge of the soft sciences: The case of psychology
Abstract: In the Critique of Pure Reason (CpR) and in the Metaphysical Foundations of Natural Science (MFNS), Kant seems to present a rather restrictive notion of science. For a body of knowledge to have scientific status, it must contain necessary and universal laws, be systematically unified, and rely on mathematical principles. Yet, as we learn in the Preface of the MFNS, the study of psychological phenomena, apparently not mathematisable and not experimentally observable in any objective way, does not fit this strict conception.
Far from removing psychology from the scientific agenda, however, this paper develops an account of empirical psychology as a scientific discipline in its own right, which is not only compatible with Kant’s critical thinking, but which also directly follows from epistemological considerations he provides in the Critique of Judgment (CJ).
In particular, the conception of reflective aesthetic judgment, based on the subjective feelings of pleasure and displeasure, provides an epistemic model for psychological knowledge claims. Investigating the example of psychology more closely sheds new light on a more inclusive conception of science in Kant, which substantially extends what is commonly understood as Kant’s strict conception of science. Thus, it draws attention to the fact that Kant’s philosophy of science includes a wider set of claims about scientific enquiries in general.
My argument comes in two parts. Firstly, by examining Kant’s alleged argument against the mathematisability of psychological phenomena, I show that this argument was in fact directed against a particular conception of psychological experience, which excludes any correlation between psychological and physical experience. Kant’s own account of psychological experience, I argue, is modelled on his account of aesthetic judgments and, in a similar way, based on subjective sensations. Thus, combining Kant’s account of aesthetic judgments with his theory of the quantification of experience, in particular the conceptions of intensive quantities, shows that psychological experiences are in fact subjectively quantifiable. The question of how such quantifications can be intersubjectively agreed upon leads to the second part. Thus, secondly, I analyse Kant’s alleged argument against introspection as an objective experimental method of observation in psychology. This analysis reveals Kant’s concern about psychological experiments that have no relation to any physical experience. By appealing to Kant’s account of regulative principles in aesthetic judgments, I show how intersubjective knowledge claims become available in psychology. These are based on correlations we draw between psychological states and physical, i.e., intersubjectively available, signs on the basis of a regulative use of the category of causality.

In conclusion, I suggest that Kant’s account of aesthetic judgments allows for aconception of psychology, which acknowledges its amenability to quantitative methods,while preserving conceptual space for interpretive methods.Thomas Pashby. Towards a Resolution of the Problem of RelativisticLocalizationAbstract: It is commonly supposed that there is no problem of localization in nonrelativisticquantum mechanics. The reason for this is that the problem of non-relativisticlocalization was supposedly solved early in the history of quantum mechanics: positionand momentum correspond to canonically conjugate self-adjoint operators, self-adjointoperators (even unbounded operators) admit a unique spectral resolution, and (mutuallycommuting) spectral projections of position correspond to the localization of a systemwithin a region of space. Although these notions benefited from notable refinements overthe years, in essence little has changed since von Neumann’s results of the late 1920s.The culmination of these refinements was the paper of Wightman (1962), who provedthat every Galilean invariant system has a Euclidean covariant localization system.However, in that same paper Wightman proved that Poincare invariant systems alsopossess a Euclidean covariant localization system, so why did he not thereby solve theproblem of relativistic localization? The reason is that the localization scheme he thusdefined corresponds to the Newton-Wigner position operator, and so has propertiesseemingly at odds with its billing as a relativistic localization system. The first of theseproblems is that Newton-Wigner localization fails to be Lorentz covariant. The second ofthese is the failure of projections associated with spacelike separated spatial regions tocommute, known as the failure of microcausality.I begin with an examination of the use of such instantaneous localization systems todescribe realistic particle detection experiments, and argue that they are ill-suited to doso. 
This is because particle detectors must operate for more than an instant. But, as I show, the instantaneous projections that such localization systems supply cannot be used to define non-instantaneous projection operators. As a result, I claim that realistic particle detection experiments require description by more general mathematical objects than projections: positive operators. Following a suggestion of Haag (1996), I argue that this indicates that such operators do not describe probabilities for the localization of an entire system, but rather the spatio-temporally located events in whose production the system plays a role.

The problem faced by the use of these operators is that in order to supply meaningful probabilities for the occurrence of these events a suitable notion of normalization is required. To resolve this difficulty I make use of the technique of operator normalization of Brunetti and Fredenhagen (2002), arguing that the Positive Operator Valued Measures that result describe just the sort of experiments in question.

With this suggestion in hand, an attempt is made to diagnose and resolve the problems of relativistic localization. I prove a ‘no-go’ result which demonstrates that the problem of covariance arises from the requirement that a system of localization should be instantaneously sharp (commuting). I suggest that the problem of covariance can be overcome by defining instead an unsharp (non-commuting) but relativistically covariant notion of localization – a “space-time localization scheme.” Although this scheme does not satisfy microcausality, I argue that the consequences for locality of an unsharp scheme are less dire than is often thought.
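[Editor's note: the operator normalization that this abstract borrows from Brunetti and Fredenhagen (2002) can be indicated schematically. The LaTeX sketch below is added here for orientation only; the notation is illustrative rather than the abstract's own, and it glosses over domain questions such as the invertibility of the integrated operator.]

```latex
% Let t -> P(t) be a family of positive operators, for instance a
% Heisenberg-picture projection P(t) = U(t)^{\dagger} P\, U(t).
% Its time integral is positive but in general not the identity:
\[
  A \;=\; \int_{\mathbb{R}} P(t)\,\mathrm{d}t \;\neq\; \mathbb{1}.
\]
% On the subspace where A is invertible, one normalizes by setting,
% for a time interval I \subseteq \mathbb{R},
\[
  E(I) \;=\; A^{-1/2}\!\left(\int_{I} P(t)\,\mathrm{d}t\right)\! A^{-1/2},
  \qquad E(\mathbb{R}) \;=\; \mathbb{1}.
\]
% The map I -> E(I) is then a positive-operator-valued measure (POVM):
% its effects E(I) are positive operators but need not be projections,
% which is the generalization the abstract argues realistic detection
% experiments require.
```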

Matt Farr. Initial Conditions and the Direction of Time
Abstract: A central issue in the philosophy and physics of time is whether contemporary fundamental physics implies the existence of an objectively privileged temporal direction. Philosophical consideration of this issue has predominantly focused on whether fundamental physical theories are time-reversal invariant: if they are time-reversal non-invariant, they imply a direction of time (e.g. Arntzenius 1995, Savitt 1996, North 2008); if they are time-reversal invariant, they imply the absence of a direction of time (e.g. Reichenbach 1956, Price 1996). Both such inferences are subject to basic objections in the literature. This paper takes an alternative approach to the issue by focusing instead on whether time-asymmetric explanations by time-reversal invariant physical theories can imply the existence of a privileged temporal direction.

In the recent literature on time-asymmetric phenomena in physics, multiple authors have claimed that there is an important time-asymmetric feature of explanation within the domain of time-reversal invariant physical theories, insofar as placing constraints on initial, rather than final, conditions purportedly offers an explanatory advantage in such cases as cosmology, statistical mechanics, classical electrodynamics, and quantum mechanics. In particular, Arntzenius (1997) and Maudlin (2007) have argued, in the cases of wave-function collapse and the thermodynamic time asymmetry respectively, that these phenomena are best explained by the relevant physical theories (quantum mechanics and statistical mechanics) on the assumption that there is some sort of metaphysical dependence of states of a system upon earlier and not later states of a system. I assess the common structure of these arguments and whether such an argument can justify the claim that the universe possesses a unidirectional temporal dependence (UTD) structure that corresponds to the philosophical accounts of time direction of (e.g.)
Lewis (1979), Mellor (1980), Horwich (1987), and Maudlin (2007).

The UTD argument’s success requires that the data in question (i.e. the physical phenomena – collapse and entropy-increase) be accountable by the relevant theory in addition to a constraint on the system’s initial condition, and that no analogous constraint on the ‘final’ condition can yield an empirically adequate account. However, I argue that in general this apparent special sensitivity of systems with respect to initial rather than final conditions is due to the way the data under consideration are selected, and disappears if the data are collected in a properly time-symmetric way. In the case of quantum mechanics, I show that the apparent asymmetry between initial and final conditions is overcome by selecting the ensembles under consideration in terms of both their initial and final states rather than just their initial state, with reference to work by Yakir Aharonov on time-symmetric quantum mechanics. I furthermore show that considering only ‘postselected’ ensembles leads to the explanatory asymmetry between initial and final conditions being reversed. However, applying similar reasoning to the statistical mechanical case is complicated due to the relationship between macroscopic (thermodynamic) and microscopic (mechanical) observables. I end with suggestions of how to overcome the apparent explanatory asymmetry between low-entropy and high-entropy macroconstraints, and argue that applying only postselection in this case leads to illuminating results.

Malte Dahlgrün. On what emotions really are: Griffiths reassessed
Abstract: Philosophers of emotion widely view Paul Griffiths’ "What Emotions Really Are" (1997) as the classic scientifically informed statement of the idea that emotion, and many emotions, fail to form a natural kind. Focusing on this book for present purposes, I offer a fundamental reassessment of Griffiths’ central claims on emotions, which have in essence not been retracted in his later writings.

1. Griffiths’ generic eliminativist claim – that our folk category of emotion fails to form a natural kind – is not an original or underdog idea which points the way to a future psychology of emotion, as he has suggested. Rather, it is a long-established idea in emotion psychology which could even be argued to form the default position in the field. In any event, emotion scientists do not tend to treat “the emotions” as a unitary category to which novel findings can be reliably extrapolated from samples (i.e., as a natural kind).

2. Griffiths’ specific claim, on the other hand – that many folk categories of specific emotions fail to form natural kinds – quite drastically misrepresents his core empirical assumptions. By virtue of advocating basic-emotions theory in the Darwin-Ekman tradition, Griffiths in fact endorses the paradigm of a view on which many emotions are indeed natural kinds.

3. The true challenge to viewing emotions as natural kinds comes from dimensional theories. Oddly, the longstanding, broad psychological research program pursued by such theories is ignored almost entirely by Griffiths. But it is between these and categorical, “basic-emotions” approaches where the real debate lies regarding emotions and natural kinds. Time permitting, I will attempt to roughly sketch the current state of this debate.

4. Finally, the positive taxonomy offered by Griffiths for the emotional realm lacks adequate support. He suggests a fundamental division into basic emotions (BEs) and higher cognitive emotions (HCEs), with “socially sustained pretense emotions” added as a third category in dubious standing. 4.1. I argue, firstly, that there are no pretense emotions. At least, Griffiths has not begun to provide the requisite arguments for positing them. 4.2. Secondly, and of wider relevance, Griffiths offers scant support for his theory pitting HCEs against BEs. It is rather implausible to assume that BEs and HCEs form the discontinuous realms that Griffiths confidently assumes them to be. Several considerations from psychophysiology and evolutionary homologies can be adduced to this effect.

Raney Folland.
Doxastic Attitudes Governed by a Principle of Coherence
Abstract: When we attempt to account for the entirety of human behavior, it has been proposed that we must look not only to belief, but also to something else. This need becomes evident when we think of cases in which we are disposed to act in ways that are discordant with our conscious beliefs. That is, we might believe that all races are equal (P), yet when we examine our automatic and implicit behaviors, they point to an attitude of inequality (not-P). How do we explain this phenomenon? Gendler (2008) argues that if we want to take seriously how human minds work, and we want to save belief, we must make conceptual room for the addition of unconscious beliefs – or aliefs. Under Gendler’s theory, alief is defined as arational, automatic, associative, and not at will, while belief is defined as rational and reality sensitive. According to Gendler, discord arises between P and not-P when the content of an alief conflicts with the content of a conscious belief.

But Schwitzgebel (2010) objects to the rigid division Gendler creates between alief and belief. He argues that belief cannot be so narrowly defined, claiming that Gendler’s view artificially separates rational and thoughtful responses from habitual, automatic, and associative responses. Instead, Schwitzgebel argues that instances of P and not-P should be thought of as states of in-between belief. According to Schwitzgebel, in-between beliefs are only partly possessed, fail to penetrate the subject’s entire dispositional structure, and occur under circumstances when it is not quite right to claim that one believes, nor is it quite right to claim that one fails to believe.

But in making conceptual room for in-between belief and alief to function alongside belief, I argue that Schwitzgebel and Gendler both mischaracterize what it means to believe. That is, they both overlook another dimension to beliefs governed by different rules and principles. I advance the idea that this overlooked dimension is constituted by implicit beliefs formed according to a deeply rooted, inherent principle in the unconscious mind: coherence. This principle allows the subject to neglect ambiguity and suppress doubt so that a neater and tidier picture of the world emerges. I propose several ways in which coherence is achieved in the acquisition of beliefs: (i) by filtering out inputs from the external environment that do not cohere with our existing belief set; (ii) by filling in gaps where evidence is incomplete, ambiguous, or contradictory; (iii) by selectively choosing evidence in the external environment that coheres with an occurrent belief; and (iv) by jumping to conclusions on the basis of limited evidence.

This principle dictates that while we may consciously, explicitly hold one belief, many of our automatic and implicit behaviors are formed according to these tenets of coherence. Finally, I claim that this new theory of beliefs provides a better explanation of the original problem presented by Gendler and Schwitzgebel: instances in which we profess to believe P, but act according to not-P.

Sabina Leonelli. What counts as the context of scientific inquiry?
Abstract: Many philosophers have stressed the importance of the notion of ‘context’ in assessing the background, development and results of scientific inquiry. Arguments for or against context-dependence span scholarship on pluralism, theory choice and modelling, as well as long-standing debates such as that on scientific realism and anti-realism. Recent scholarship on the nature of scientific evidence, the process of understanding and the travel of facts has also emphasised the importance of the multiple environments within which scientific knowledge, in its various forms, is developed, disseminated and used. Nevertheless, the notion of context remains under-theorised within the philosophy of science.
This is particularly true when context is taken to encompass the conditions under which any one specific line of inquiry is developed - a characterisation that focuses on the unique circumstances in which instruments, materials, models and policies are developed and used in science, rather than on long-standing and cohesive research traditions as in Kuhn’s paradigms, Lakatos’ research programmes or even Galison’s trading zones. Starting with Hans Reichenbach’s reflections on contexts of discovery and justification, and using insights from John Dewey’s theory of inquiry (particularly his views on ‘situations’), I reflect on what constitutes context in scientific research and how this notion can be used to better understand and investigate, rather than dismiss or discount, the social and material nature of scientific inquiry. I ground my discussion in the empirical analysis of a specific set of scientific practices: those involved in the circulation, integration and interpretation of data that are disseminated through digital technologies such as online databases. This case illustrates that what counts as context for data and their interpretation at any point of their journeys can vary considerably, and yet these variations matter enormously to both the development and the results of scientific inquiry.
