1. I got one of my papers (on the relata of the causal relation) accepted to present at the University of Calgary Philosophy Graduate Conference in the fall.
2. I ran 5 miles, in a row, without passing out, and without anyone chasing me. 44:14 for the record.
7.27.2007
A Priori Knowledge
Central to any view of a priori knowledge is an account of how such beliefs gain their epistemic justification independently of experience. On Alvin Plantinga’s account, one has a priori justification for a proposition only if one sees the truth of the proposition in question. According to Plantinga, to see that a proposition is true is to believe that it is true, and necessarily true, to form this belief immediately (not on the basis of other beliefs, memory, or testimony), to form this belief with a particular, hard to describe, phenomenology, and to do so while not malfunctioning. One can also gain a priori justification for a proposition by seeing that it follows from a proposition that one sees to be true. When one sees that a proposition is true, or sees that it follows from a proposition that one sees to be true, and the proposition is true, then one has a priori knowledge of it.
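Schematically, Plantinga's conditions can be collected as follows (the numbering and notation are my own summary, not Plantinga's wording):

\[
S \text{ sees that } p \iff
\begin{cases}
\text{(i) } S \text{ believes } p \text{ and believes } \Box p,\\
\text{(ii) the belief is formed immediately,}\\
\text{(iii) it has the distinctive phenomenology, and}\\
\text{(iv) } S \text{ is not malfunctioning.}
\end{cases}
\]

A priori knowledge then requires that p be true and that S see that p is true (or see that it follows from something S sees to be true).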
There are a couple of problems with Plantinga’s account. First, it seems that Plantinga has not accurately captured what it is for one to see that a proposition is true. Plantinga claims that believing a proposition is a necessary condition for seeing that it is true, but this does not seem right. A proposition can seem true to me even though I do not believe it (for instance, that the top line in the Müller-Lyer illusion is longer, or Frege’s Axiom V). Similarly, one can believe a proposition without its seeming true, as when one believes a theorem on the basis of a mathematical derivation even though the theorem is too complicated to seem true. Beliefs are typically formed on the basis of seeing that a proposition is true, but the seeing and the believing are distinct relations one bears to the proposition. As such, Plantinga gets the nature of seeing the truth wrong.
A second problem with Plantinga’s analysis concerns the modal requirements for gaining a priori justification. Plantinga claims that to see that a proposition is true one must not just believe it, but believe that it is necessarily true. Yet it seems that one can come to know a proposition in an a priori fashion while being ignorant of modal concepts like necessity, or while holding mistaken views about necessity according to which the proposition in question is not necessary (I am assuming here, with Plantinga, that all propositions that are known a priori are necessary). For instance, I could justifiably believe that mathematical propositions, like 2+2=4, do not have their truth values necessarily. Even though I am mistaken in this regard, it still seems that I can see the truth of 2+2=4 and/or that I can know that 2+2=4 a priori. In addition, Plantinga’s appeal to believing that the proposition in question is a necessary truth threatens an infinite regress. Presumably, the belief that the proposition in question is necessarily true is one that must itself have a priori justification (if not, then it is hard to see how it could contribute to the a priori justification of the proposition in question). If so, however, then this belief too must be seen to be true, but in order to see that it is true one must also believe that it is necessarily true. The ‘necessarily’ modifiers will quickly compound, leading to an infinite regress and to propositions that are plausibly too complicated to be believed by human minds. This reveals another flaw in Plantinga’s account.
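To make the regress explicit (this is my reconstruction of the worry), write \(\Box p\) for 'p is necessarily true':

\[
\text{seeing } p \text{ requires believing } \Box p; \quad
\text{a priori justification for } \Box p \text{ requires seeing } \Box p, \text{ hence believing } \Box\Box p; \quad \ldots
\]

At stage \(n\) one must believe \(\Box^{n} p\), and these propositions quickly outrun what a human mind can plausibly entertain.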
George Bealer relies on intuitions, or intellectual seemings, to provide the a priori justification required for a priori knowledge. For S to have an intuition that P is for it to seem to S that P. Thus, intuitions are conscious episodes. Bealer distinguishes intuitions from beliefs for the reasons mentioned above, so he avoids one of the problems that Plantinga’s account encountered. However, Bealer holds that it is rational intuitions that do the work for a priori justification, and according to Bealer, a rational intuition presents a proposition as necessary – it must seem to S that P must be true. Worries arise here, as above, since it seems that one can have a priori justification while lacking the modal concepts Bealer appeals to, or while having a mistaken view of the relevant modal concepts (as described above).
Another contemporary proposal claims that the requirements of concept possession can provide the needed a priori justification. Paul Boghossian claims that in order to possess certain concepts one must be disposed to reason in certain ways. For instance, in order to possess the concept ‘conditional’ one must be disposed to reason according to modus ponens. The claim is that such inferences are thus justified in virtue of their being requisite for the possession of certain concepts. Propositions can then be known in an a priori fashion when they are the conclusions of such justified inferences.
Several problems are apparent with this account. First, it seems doubtful that one must be disposed to reason in certain ways in order to possess certain concepts. There does not appear to be anything incoherent about the idea of a wholly passive mind that possesses concepts but is unable to perform any mental acts, such as inferring. So, it is doubtful that having such dispositions to reason is indeed requisite for the possession of concepts.
Second, even if such dispositions were requisite, this fact does not epistemically justify their use. Doing what it takes to possess certain concepts may be rational in a means/ends sense, but that does nothing to epistemically justify or entitle one to make such inferences. We could imagine a case where S is offered some epistemically valuable end if S performs the inference from ‘P or Q’ to ‘P and Q’. Performing such an inference would be beneficial for S, but this fact in no way epistemically justifies S in performing the inference.
Finally, even if the inferences were justified, a significant problem remains for Boghossian’s account. If the conclusions of such justified inferences are supposed to be justified a priori, then we also need premises that are justified a priori. All that Boghossian’s account even attempts to do is justify the inferences, and this is inadequate to the task at hand – the task of accounting for a priori knowledge.
7.26.2007
Fight Club: Regina Chapter
As you can see from the picture a couple of posts ago, Thomas is only a few years away from attending his first hockey fight camp. (thanks to Trent for the story)
After all, this guy seems to be the guy to learn from. It's only fitting that a prairie boy would throw so many haymakers! (see post above)
7.25.2007
I've Been Simpsonized!
You can see it here. I can't figure out how to download the picture though, so if you can help me there, let me know.
UPDATE: ok, I just went to the link and that is totally not the picture it showed me before! This site has totally frustrated me after offering me so much.
7.22.2007
Kapow!
7.20.2007
Duck-Rabbit
Virtue Epistemology
Virtue epistemology utilizes virtues in addressing the prominent problems in epistemology. A distinction is made between moral virtues and epistemic or cognitive virtues. Within the virtue epistemology camp, there is a divide between reliabilist and responsibilist understandings of epistemic virtues. I will focus here on a reliabilist account. Roughly put, an epistemic virtue is a stable disposition to achieve certain results (true beliefs) in certain circumstances. More precisely, a mechanism M for generating or maintaining beliefs is an epistemic virtue if and only if M is an ability to believe true propositions and avoid believing false ones within a field of propositions F when one is in a set of circumstances C.
The virtue epistemologist’s claim, then, is that a proposition p is epistemically justified for S if and only if S’s believing p is the result of an epistemic virtue of S. Understood as such, virtue epistemology is a type of process reliabilism. By specifying which type of processes can produce an epistemically justified belief, virtue epistemologists attempt to provide an account of epistemic justification (and often knowledge) that avoids the problems of ‘simple’ reliabilism.
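Putting the two definitions together in symbols (this is my formalization; the reliability threshold \(t\) is a gloss on 'ability to believe true propositions and avoid believing false ones', not part of the official account):

\[
\begin{aligned}
&M \text{ is an epistemic virtue for } S \iff \Pr\big(p \text{ is true} \mid S \text{ believes } p \text{ via } M,\ p \in F,\ S \text{ is in } C\big) \ge t,\\
&p \text{ is epistemically justified for } S \iff S\text{'s belief that } p \text{ results from some epistemic virtue of } S.
\end{aligned}
\]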
The New Evil Demon Problem was a problem for reliabilism since in a world where one is massively deceived it seems as though one can nonetheless have justified beliefs despite the unreliability of the processes that produce them. This appears to be a problem for the virtue epistemologist as well, since one can believe propositions on the basis of what seems to us to be epistemic virtues (and seem to be epistemically justified in those beliefs), but believing in such a way does not lead to true beliefs in the evil demon world.
Ernest Sosa’s response as a virtue epistemologist is to relativize epistemic justification to an environment. In other words, the individual in the demon world is epistemically justified in her belief since she utilized cognitive faculties that are epistemic virtues in our environment. Since coming to beliefs in such a way would be reliable in our environment, and would be the result of an epistemic virtue, we consider the demon worlder to be justified. Epistemic justification is thus relativized to the actual world.
The above response is unsatisfactory, however. Sosa’s response does not account for all of our intuitions here. To see this we can imagine that we are being deceived by an evil demon as well. In such a scenario, coming to beliefs by way of seeming epistemic virtues is not a reliable way to come to beliefs. According to Sosa, our beliefs are not justified in such a scenario, but we still think that they are. Our intuitions are that such beliefs are epistemically justified regardless of whether one is in a demon world, even if the actual world is a demon world. This problem remains for virtue epistemologists.
A second problem for reliabilism concerns reliable belief-forming processes at work in an individual who has reason to doubt that his processes are reliable. In such a scenario reliabilism has it that he is justified in the beliefs produced by the reliable process, but it seems as though the evidence that he has regarding the unreliability of these processes renders the resultant beliefs unjustified. This problem too seems to remain for the virtue epistemology response. A belief could be the product of an epistemic virtue, yet one could have evidence against its being the product of such a virtue. Can virtue epistemology get the right result that the resultant belief is unjustified?
Sosa attempts to get this result by making a distinction between animal knowledge and reflective knowledge. This is itself a cost, since it seems that we have only one concept of knowledge; positing two such concepts seems to be a last resort. According to Sosa, to have animal knowledge one must believe out of epistemic virtue, which makes the resultant belief apt, but to have reflective knowledge one must believe out of epistemic virtue and be aware of so doing, which makes the belief justified (i.e., one must also believe out of epistemic virtue that her [first-order] belief was produced by an epistemic virtue). To be justified, one must recognize regarding her belief that p that it was produced by an epistemic virtue (i.e., she must recognize (i) that p falls into the relevant range of propositions, and (ii) that she is in one of the relevant circumstances for her belief-producing mechanism to be reliable).
Applied to the case where one has misleading evidence regarding the reliability of the belief-producing process, on this account one has animal knowledge (the belief is apt) but lacks reflective knowledge (the belief is not justified). Although this account gets the right result regarding the case of misleading evidence, it has problematic consequences. The problem is that very few people have any beliefs about their beliefs, such as that their belief was formed out of epistemic virtue. Whereas people may recognize that their belief was formed on the basis of perception, they do not believe that the relevant proposition falls within a certain range of propositions or that they are in a circumstance among a set of acceptable circumstances such that perception is reliable for such propositions in such circumstances. Such propositions are not typically believed, even dispositionally. As a result, Sosa’s account implies that all such individuals (most individuals) are not epistemically justified in their beliefs. Their beliefs may be apt, but they are not justified. However, it seems that most individuals are epistemically justified in at least a good number of their beliefs, or minimally, that their epistemic standing with respect to such propositions is better than the aptness required for animal knowledge. Such meta-beliefs simply do not appear to be required for epistemic justification.
7.19.2007
Seeing and Still Not Believing
Apparently I live near the Jell-O museum (yes, Bill Cosby has visited). Anyone want to visit me now?
Try to get this:
March 17, 1993, technicians at St. Jerome hospital in Batavia test a bowl of lime Jell-O with an EEG machine and confirm the earlier testing by Dr. Adrian Upton that a bowl of wiggly Jell-O has brain waves identical to those of adult men and women.
7.17.2007
Monkey Man
7.16.2007
Gambling v. Investing
So, I have heard a fair number of people make a distinction between gambling and investing (i.e., gambling is wrong, investing is at least permissible). I find it hard to draw a clear distinction between the two. Here are a couple distinctions you might try to make, and what I think is wrong with them.
1. Gambling is an attempt to 'get rich quick'.
Well, this might be so for most gamblers, but there is nothing about gambling that makes it about getting rich quick. After all, one could make long-term bets (e.g., betting on what will happen in 2020) or bets that will not be paid out for a while. There does not appear to be anything essential to gambling about getting rich quick. It is hard to imagine that those who oppose gambling oppose only the get-rich-quick kinds of gambling, and would have no problem with a bet that gets paid off far into the future. The distinction must lie elsewhere.
2. Gambling is associated with a shady lifestyle.
Whether or not this is so, it is irrelevant to the distinction at hand unless it can be shown that the lifestyle is shady in virtue of gambling. In other words, the association may point to a problem, but it is not itself the problem. In addition, the lifestyle associated with investing is not much better. Both worlds are full of greed and ego and have stories of people who lost it all (think Great Depression). It is difficult to see how the two can be distinguished in this way.
3. Gambling relies on chance.
This is perhaps the best shot at a clear distinction, but it too has problems. First of all, investing also relies on chance. Chance events can greatly affect the value of a company's stock (e.g., a plane crash hurts that airline). So, chance has a great deal to do with one's investments. One might reply that events like airplane crashes, earthquakes, and such are not chance events, but then it is hard to see how the roll of a die is a chance event either. Even if it is granted that gambling relies on chance in a way that investing does not, it remains to be seen how such a reliance makes gambling a worse activity. Many other activities that we do not have a problem with rely on chance.
All of this is not to encourage or discourage gambling or investing; it is simply a report of an ongoing failure to find a suitable distinction between the two.
Next up: searching for a distinction between saving money and hoarding money. They sound so different, but are they?
NFL Predictions
AFC EAST . . . . . . NFC EAST
1. Patriots 13-3 . . . . 1. Cowboys 10-6
2. Jets 9-7 . . . . . . 2. Eagles 8-8
3. Dolphins 8-8 . . . . 3. Giants 7-9
4. Bills 5-13 . . . . . 4. Redskins 5-11
AFC WEST . . . . . . NFC WEST
1. Chargers 12-4 . . . 1. Rams 11-5
2. Broncos 9-7 . . . . 2. Cardinals 9-7
3. Chiefs 6-10 . . . . 3. Seahawks 8-8
4. Raiders 4-12 . . . . 4. 49ers 7-9
AFC NORTH . . . . . NFC NORTH
1. Ravens 13-3 . . . . 1. Bears 10-6
2. Bengals 11-5 . . . . 2. Lions 10-6
3. Steelers 7-9 . . . . 3. Packers 7-9
4. Browns 4-12 . . . . 4. Vikings 5-11
AFC SOUTH . . . . . NFC SOUTH
1. Colts 12-4 . . . . . 1. Saints 11-5
2. Jaguars 10-6 . . . . 2. Panthers 10-6
3. Texans 5-11 . . . . 3. Buccaneers 5-11
4. Titans 2-14 . . . . 4. Falcons 3-13
What is Evidence?
Timothy Williamson claims that something is evidence for a hypothesis when it speaks in favor of it (raises its probability) and it has some credible standing. The credible standing that Williamson finds necessary for evidence is knowledge. The claim is that one’s total possible evidence consists of what one knows: all and only what one knows constitutes one’s evidence at a time. In other words, S’s evidence is S’s knowledge.
S’s total evidence with regard to a hypothesis then is that knowledge S has which raises the probability of that hypothesis (the hypothesis’s probability is higher when it is conditionalized on S’s knowledge).
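In symbols (my rendering of the two clauses, not Williamson's own notation):

\[
E_S = \{\, p : S \text{ knows } p \,\}, \qquad
e \text{ is evidence for } h \text{ for } S \iff e \in E_S \ \text{and}\ \Pr(h \mid e) > \Pr(h).
\]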
Williamson’s account is overly restrictive in what it requires for credible standing. Suppose that Joe has a perceptual experience of a blue book being in front of him and has good reasons to trust his perceptual faculties in this case. Presumably, and intuitively, Joe has evidence that there is a blue book in front of him. However, on Williamson’s account, this is only the case if Joe knows some propositions that support this claim. Though Joe may know a number of propositions that support his perceptual faculties being reliable, he can fail to know that he seems to see a blue book in front of him simply by failing to form that belief (though, were he to form it, it would be justified and true). This kind of doxastic failure seems possible, yet if it is possible, then Joe does not know that he seems to see a blue book in front of him, so his perceptual experience does not produce any evidence for him on this score. As such, Joe’s evidence would not support the proposition that there is a blue book in front of him according to Williamson’s account. This is the intuitively wrong result and shows that Williamson’s account is overly restrictive in this way.
Richard Feldman defines a person’s total possible evidence as all and only that information that is stored in that person’s mind at that time. Of this set, a person’s total evidence is that part of their total possible evidence that is available (meets some psychological accessibility constraint) and acceptable (meets some epistemic acceptability constraint). Feldman sees the evidence that passes the accessibility constraint as the evidence that S is currently thinking of (the conscious and perhaps unconscious beliefs, as well as the non-doxastic mental states that one is aware of being in). The evidence that passes the epistemic acceptability constraint consists of those available items that are, or could be, justifiably believed.
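As a rough set-theoretic gloss (the predicate names here are mine, introduced only for illustration):

\[
\text{Evidence}_S(t) = \{\, x \in \text{Stored}_S(t) : \text{Available}_S(x, t) \wedge \text{Acceptable}_S(x, t) \,\},
\]

where Stored collects the information in S's mind at t, Available marks what meets the psychological accessibility constraint, and Acceptable marks what is, or could be, justifiably believed.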
Unlike Williamson’s account, Feldman’s account is not overly restrictive in requiring knowledge. Applied to the case of Joe above, even if Joe does not form the belief that he seems to see a blue book in front of him, he is nonetheless justified in so believing. As such, this is part of his evidence on Feldman’s account. Therefore, Joe’s evidence does support there being a blue book in front of him – the right result. In addition, Feldman’s account is not too lax, since it does not allow mere beliefs to count as evidence. If mere beliefs counted as evidence (as in some coherentist theories), then propositions that one had no business believing would affect what that individual epistemically ought to believe – and this cannot be. Setting the epistemic acceptability constraint at justification avoids being either too lax or too strict. Feldman’s account of evidence squares with our intuitions about what evidence a person has at a time.
An objection to this account claims that according to it one’s evidence can support a proposition even if that individual has important counter-evidence stored in his mind but is simply not thinking about it – particularly if it would be easy for him to recall these things. In such a case there is something wrong with the individual believing the proposition, but he is believing in accord with his evidence according to Feldman’s analysis.
The problem with so believing, however, is not that the individual is failing to believe according to his evidence, but that he has failed to act responsibly in forming his belief – he failed to think carefully about the matter and call to mind the relevant information stored in his mind. One can believe according to the evidence and still be blameworthy for the way one conducted one's investigation. This is such a case. Nothing here counts against Feldman’s account of evidence possession.
Another objection to this account of evidence is that according to it, there are many propositions for which we currently do not have any evidence (there is nothing we are currently entertaining that pertains to them) yet intuitively we know some of these propositions to be true. Take for example the proposition that Bush is president. Before it was mentioned, you probably had no thoughts about the matter, yet intuitively you still knew that Bush was president. Such a case seems to go against Feldman’s account of evidence, but the apparent problem can be explained away.
We can distinguish occurrent and dispositional senses of knowledge. Whereas what one occurrently knows is determined by the evidence one possesses, what one dispositionally knows is determined by the evidence that individual would possess were he to think about the matter. Thus, you dispositionally know that Bush is president, since were you to think of it you would possess evidence that supports that proposition – you would recall having heard on the news that Bush is president and having watched his inauguration, experience a feeling of confidence that the proposition is true, etc. Thus, distinguishing these two kinds of knowledge can account for why we think that you know that Bush is president even though your evidence does not support this proposition – you only dispositionally know it.
7.06.2007
7.05.2007
Internalism/Externalism Debate
The internalism/externalism debate in epistemology regarding justification centers on the question of what states, properties, and events can contribute the kind of justification necessary for knowledge. Internalism has been understood in a variety of ways, but roughly it is the claim that all the factors that justify beliefs are internal to the cognizer. Internalism is best construed as mentalism, the claim that the justificatory status of a person’s beliefs strongly supervenes on that person’s mental states, events, and conditions. Thus, if two people are alike mentally, then they are alike justificationally as well. Internalism has often been understood to include the claim that justifiers must be accessible to the cognizer. However, it is more straightforward and noncommittal to understand internalism as described above, with theories that include an accessibility requirement being seen as species of internalism, or one (somewhat popular) way to fill out the theory. Internalism understood as mentalism lets in all the theories that can plausibly be considered ‘internalist’. Thus, we will understand internalism as the claim that only mental factors determine justification. Foundationalism and coherentism are examples of internalist theories of justification.
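The supervenience claim can be put as follows (this formalization is mine):

\[
\forall S_1 \forall S_2 \ \big[\, M(S_1) = M(S_2) \rightarrow J(S_1) = J(S_2) \,\big],
\]

where \(M(S)\) is the totality of S's mental states, events, and conditions, and \(J(S)\) is the justificatory status of S's beliefs.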
Externalism is the denial of internalism. Thus, the claim is that factors that are not internal to the cognizer (factors that are not the cognizer’s mental states, properties, or conditions) make an epistemic difference in terms of justification. Extramental factors play a justificatory role. Reliabilism and proper-function theory are examples of externalist theories of justification.
Arguments in favor of internalism focus on how well internalism can handle cases of justified and unjustified belief. Imagine that Jim and George both see on the news that it is raining today. In addition to watching the weather, Jim looks outside and sees the rain falling. Internalism can explain why Jim is more justified in his belief than George is because of a mental difference – Jim has perceived the rain falling.
Imagine that Jim and George each hear a bit of testimony from Tracy. Tracy is a very reliable person; Jim knows about Tracy’s good track record, but George does not. As such, Jim is more justified in believing what Tracy says. The mental difference (memories regarding Tracy’s good track record that Jim has and George lacks) accounts for the justificatory difference here.
So, internalism seems to get the cases right. Arguments for externalism largely consist in attempts to show that limiting justifiers to the mental fails to give the right result in certain cases.
Several externalists have claimed that internalists face a problem of forgotten evidence. Roughly, the idea is that one can still be justified in believing a proposition even when one has forgotten the evidence that supported that proposition; and, if so, then there must be something extramental doing some justificatory work. Goldman gives the case of Sally who reads in the NYT that broccoli is healthy. She forms that belief, but then forgets her evidential source and never comes across any further sources. Nonetheless, her belief is justified and if true is a case of knowledge.
Internalists can respond by noting that if Sally really is justified in her belief, then there will be some mental factors doing the work. For instance, it is likely that Sally has a feeling of confidence or clarity regarding the healthiness of broccoli. These phenomenal qualities are mental factors that can play a justificatory role for an internalist (contrast the support of a hazy memory). Further, Sally is likely aware of the general reliability of her memory, and aware that she usually does not simply believe things without having a good reason. These mental states can also provide justificatory support for her belief. If Sally lacks all such support, then it is doubtful that she truly is justified in her belief regarding the benefits of broccoli.
Externalists have responded by altering Sally’s case such that her source on the benefits of broccoli is actually an unreliable one, such as the National Enquirer. Sally has forgotten the source, however, so if the internalist response above is correct, then Sally will still be justified in her belief that broccoli is healthy (provided the other mental factors mentioned above are true of her). The externalist claims that though her belief is true, it cannot be a case of knowledge, so Sally must not be justified in her broccoli belief.
This inference, however, is mistaken. It can be that Sally is justified in her true belief and yet fails to have knowledge – Gettier has shown us this. The reconstructed Sally case is indeed a Gettier case. The reasons that Sally has for thinking that broccoli is healthy provide justification, but they also contain an essential falsehood: that Sally’s reasons come from a reliable source. Thus, Sally’s case follows the recipe for Gettier cases – it is an instance of a true, justified belief that fails to be knowledge.
Some may find it strange that Sally’s belief became justified through her forgetting the source of her reasons (by forgetting that it was an unreliable source). A couple of things favor the internalist here, however. First, if Sally did remember the unreliable source, her belief would surely be less reasonable. So, forgetting the unreliability of the source does seem to increase the justification. Second, if it is denied that Sally is justified in this revised case, then some distinction must be drawn to have it that she is justified in the first case, yet not in the second, even though the two cases are on a par from Sally’s perspective. An account that distinguishes the cases in this way must go contrary to stronger intuitions. There is no problem for internalism here.
7.02.2007
Luke and Philosophy
I am currently reading through Luke, and found these verses relevant to several philosophical issues I am interested in.
v. 13, 14: "Woe to you, Korazin! Woe to you, Bethsaida! For if the miracles that were performed in you had been performed in Tyre and Sidon, they would have repented long ago, sitting in sackcloth and ashes. But it will be more bearable for Tyre and Sidon at the judgment than for you."
A couple of things this passage seems to be evidence for:
1. God has middle knowledge. It seems that God has knowledge of what human beings would have freely done in counterfactual circumstances (in situations other than the actual situation). Jesus claims to know what those in Tyre and Sidon would have done were they to witness the miracles performed in Bethsaida.
2. We are responsible for actions that we do not actually perform if we would have performed them in different circumstances. Jesus says that it will be much more bearable for Tyre and Sidon at judgment in virtue of the relevant counterfactual (had they seen, they would have repented). This repentance is not something that those in Tyre and Sidon had actually done, yet they are being judged accordingly. Thus, it seems that we are held responsible for what we would have done in other circumstances. This can be understood as simply saying that we are responsible for our character, and our character is comprised of what we would do in certain circumstances (you are honest if you would tell the truth in circumstances . . . ). (I sketch both points in counterfactual notation at the end of this post.)
This is something that I had claimed a few years ago as a response to the problem of moral luck. These thoughts apply particularly to circumstantial luck. The problem here is that it seems a little weird that, if you would have done everything a Nazi sympathizer did had you lived during that era in Germany, you are not culpable while the sympathizer is, the difference resting solely on the sympathizer's bad luck of living then and there. My idea was that we actually are responsible for what we would have done if we lived back then and there -- this is a way of neutralizing the effect of luck on our moral appraisals. This passage makes me feel that my view is not so crazy.
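Both points can be put in the standard counterfactual notation, writing \(C \mathbin{\Box\!\!\rightarrow} A\) for 'if circumstances C obtained, S would freely do A' (this formulation is my own gloss, not anything in the text):

\[
\begin{aligned}
&\text{Middle knowledge: for every } C \text{ and } A, \text{ God knows whether } C \mathbin{\Box\!\!\rightarrow} A.\\
&\text{Responsibility: if } C \mathbin{\Box\!\!\rightarrow} A, \text{ then } S \text{ bears (some) responsibility with respect to } A, \text{ even if } C \text{ never obtains.}
\end{aligned}
\]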
7.01.2007