Raziaar WTF??!?! You already started this argument in another thread and we've had 8+ pages on it. http://www.halflife2.net/forums/showthread.php?p=2133920#post2133920 Yet for some reason you felt the need to infect this thread with it. Besides, for anyone willing to pay attention, we showed that 0.999... = 1. Wikipedia Entry

Sure it can. 0.000...1 represents the sequence that takes 0.01, 0.001, 0.0001, etc. to infinity. The 1 is negligible as far as calculations are concerned, but it's still possible to tack a 1 (or any number, no matter how large) on the end of an infinite number of zeros.

I didn't start it on this thread, jackass. Somebody else already mentioned it, and then a couple others did, and then I replied. So kindly **** off, you and The Brick.

I'm sorry. I will never make another comment about .999 = 1 in another thread again. Look what I start! Unless I want to start a 10-page flame-fest. Actually...

What is 1/10^1? 0.1 What is 1/10^2? 0.01 What is 1/10^3? 0.001 What is 1/10^infinity? Precisely the number in question: 0.0000.....1, or zero, for short. This is called induction. After all, nobody can deny that one divided by infinity is zero. Hence, 0.0000....1 = 0 and 0.999999999..... = 1, because 1 - 0.000000.....1 = 0.999999999999..... = 1 - 0 = 1.
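The pattern this post leans on is easy to poke at numerically. A minimal sketch in Python (not a proof, just an illustration), using exact rationals so float rounding doesn't muddy the point:

```python
from fractions import Fraction

# 1/10^n shrinks toward 0, so 1 - 1/10^n (0.9, 0.99, 0.999, ...) climbs toward 1.
# No finite n ever gets there, which is exactly why the claim is about the limit.
for n in (1, 2, 5, 10):
    gap = Fraction(1, 10**n)          # exact value of 1/10^n
    print(n, float(gap), float(1 - gap))
```

Note that at every finite stage the gap is still a positive number; 0.999... = 1 is a statement about the limit of the sequence, not about any finite stage.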

good lord......a battle of numbers! all i wanted to do was tell you about teh cool japanese man who likes numbers.

*Seeing that I've successfully started a .999 thread* *smiles schemingly* It does. You can modify something at the end of infinity, but the modified value will be exactly the same as the previous one. Actually, we have listed dozens of methods. This is only one of them. If you can't accept it, you can choose to read other proofs. I know you are, work harder. No, it is impossible. lol

Oops Sorry about that - I sometimes read things without processing them and it's bitten me royally in the arse this time. Please accept my apologies. I will **** off now.

1+1 = 2. 1+0.999... = 1.999... Therefore 0.999... is not one! \/ Well anyway, there are no such things as infinite numbers. The number of digits in a number is always definite; it may just be too many for the human mind to count, obviously.

What have you done!? WHAT HAVE YOU DONE!? This means that another thread will be devoted to this over-debated argument. If 0.999... == 1, then 1 + 0.999... is really 1 + 1, therefore 1 + 0.999... is 2. Also, infinity does exist, and our number system allows it to exist. Not just 1/infinity = 0: any number divided by infinity equals 0, just as any number (apart from 0) divided by 0 equals infinity.

1.999... = 2 though, just as .999... = 1. If you don't believe that .999... = 1, then tell me the answer to this problem: 1/3 * 3 = ? What's the answer? Well, according to your logic it is not 1! Because 1/3 = 0.333... and 0.333... x 3 = .999..., so if 0.999... != 1 then 1/3 * 3 != 1. However, let's apply real logic: 1/3 + 1/3 + 1/3 = 1 = 0.333... + 0.333... + 0.333... = 0.999... As it turns out, 1 = 0.999...
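The 1/3 argument can be checked with exact rational arithmetic; a quick Python sketch:

```python
from fractions import Fraction

third = Fraction(1, 3)              # exactly 1/3, no decimal truncation
assert third * 3 == 1               # 1/3 * 3 is exactly 1
assert third + third + third == 1   # 1/3 + 1/3 + 1/3 is exactly 1
```

The decimal 0.333... is just the base-10 shadow of this exact value; the arithmetic itself never leaves the rationals, which is why no "missing bit" ever appears.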

1/infinity is zero, by definition, by common sense. Even if .999... = 1 did not hold, 1/infinity would still be 0. This is a definition: lim_{x→∞} 1/x = 0 is a definition. Don't put a "so" in the sentence. Man becomes desperate.

Actually, .999... = 1. Here's the thread: http://www.halflife2.net/forums/showthread.php?t=93360&highlight=.999

With the following I hope to put the 0.999.... = 1 matter to rest. Observe that another way to look at it is that 0.9 recurring is 9/10 + 9/100 + 9/1000 + ...., and then use the formula for the sum of a geometric series. It's a really nice bit of maths, so in case you don't know it I'll quickly go through it.

q: what is 1/5 + 1/25 + 1/125 + ... + 1/(5^10)? a: well, we know it is a number (i.e. this sum is finite as there are only finitely many terms), so call it S (for "sum"). Then S/5 = 1/25 + ... + 1/(5^10) + 1/(5^11), so S - S/5 = 1/5 - 1/(5^11), and hence S = (5/4)*(1/5 - 1/(5^11)), which after simplifying becomes (1 - 1/(5^10))/4. There is a formula for this, but the instructive thing is to remember how to derive it, as above. It's not hard at all.

The infinite case is a bit trickier. How do we know that 1/5 + 1/25 + 1/125 + ... is actually a number, i.e. that it isn't infinite? This might seem unimportant, but consider instead the following situation: S = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... (this is called the harmonic series, and is very interesting). It turns out that this tends to infinity, so it is a logical fallacy to say "call this number S" and then use it as if it were a real number. You can get into all kinds of problems (and people did) by making this mistake.

It turns out that geometric series (i.e. those in which each term is a fixed multiple of the term before) always converge when the ratio is < 1 in absolute value. The way to see this is to look at the first N terms, and use what we did above. So 1/5 + ... + 1/(5^N) = (1 - 1/(5^N))/4 as above, and this is always less than 1/4, so 1/5 + 1/25 + ... doesn't tend to infinity. Exactly the same argument shows that a + ar + ar^2 + ar^3 + ... always converges when -1 < r < 1 (in the negative case you have to check that the sum doesn't tend to minus infinity either).

Now we know that, we can do the same trick as above: S = 9/10 + 9/100 + ..., so S/10 = 9/100 + 9/1000 + ..., and hence (1 - 1/10)S = 9/10, i.e. S = 1.
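The telescoping derivation above can be sanity-checked numerically; a small sketch in Python with exact rationals (the helper name `partial_sum` is mine):

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms of 9/10 + 9/100 + 9/1000 + ..., exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 2, 5, 20):
    s = partial_sum(n)
    # Matches the closed form from the S - S/10 derivation: S_n = 1 - 1/10^n
    assert s == 1 - Fraction(1, 10**n)
    print(n, float(s))
```

Every partial sum agrees with the closed form 1 - 1/10^n, and those values climb toward (but never exceed) 1, which is the convergence claim the derivation rests on.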
I remember quite well being confused by this when my teacher first mentioned it. Here's a question to ask yourself: what is 1 - 0.9999...? It is clearly not negative. And it's less than 1/10, as 0.9999... is greater than 0.9. And it's less than 1/100, as 0.9999... is greater than 0.99 = 1 - 1/100. In fact, for any positive integer n you care to name, it is less than 1/(10^n), as 0.9999... is greater than 0.(n 9s). What non-negative number is less than 1/(10^n) for all n? Well, it can only be 0.

The maths of the real numbers (i.e. anything with a decimal expansion, so that includes integers, rational numbers (p/q), solutions of equations (root 2, sqrt(1+sqrt(2))), and even numbers that aren't roots of (polynomial) equations (pi, e, uncountably many others)) is very interesting. It turns out that this concrete construction, via infinite decimals, is not the most useful one. It makes it hard to prove things. There are three other characterisations of the real numbers, which are all equivalent.

(a) any increasing sequence which is bounded above (i.e. doesn't tend to infinity) tends to a limit. (monotone sequences axiom; monotone means always increasing or always decreasing)

(b) any non-empty set of real numbers which is bounded above has a least upper bound. (least upper bound axiom)

(c) an infinite sequence of points in an interval of finite length must have a subsequence which tends to a limit. (Bolzano-Weierstrass axiom) (strictly, the interval must contain its end-points)

So to prove things in the real numbers, you choose one of the above axioms and use that and all the facts you know about the rationals (you're working in the smallest field containing the rationals such that your axiom is true). It is something you have to learn in the first year of a maths degree to prove that each axiom is equivalent. That would take too long for me to explain now, but it is worth seeing why these don't work in the rationals.
The thing that's hard to grasp the first time you see this is that when we say "tends to a limit" we mean "there is a point in the field we are considering which this sequence tends to". (a) take successively better approximations to pi, so 0, 1, 2, 3, 3.1, 3.14, 3.141, etc. This is increasing and bounded above (by 4, say), but if it did tend to a limit then that limit would have to be pi, and pi isn't a rational number. (b) just take the set of all points I outlined above. (c) again, the sequence above works, as it is contained in [0, 4].
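The "1 - 0.9999... is below 1/(10^n) for every n" argument from this post is also easy to illustrate; a small sketch with exact rationals (the helper name `nines` is mine):

```python
from fractions import Fraction

def nines(n):
    """The finite decimal 0.99...9 with n nines, as an exact rational."""
    return Fraction(10**n - 1, 10**n)

# The gap 1 - 0.99...9 equals exactly 1/10^n, so it drops below any positive
# bound you name; the only non-negative number below all such bounds is 0.
for n in (1, 3, 10):
    assert 1 - nines(n) == Fraction(1, 10**n)
```

Since 0.9999... exceeds every `nines(n)`, the quantity 1 - 0.9999... is squeezed below every one of these gaps, and 0 is the only non-negative number that small.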

What are real numbers? Real numbers are the numbers used to measure lengths. (They were essentially invented for this purpose, hence this is the best way to understand them, although later it turns out they can also be used to measure other quantities such as areas.) Imagine an ideal line, infinitely long in both directions, straight, and continuous without breaks or gaps. Fix a point to begin at, called 0 (zero), and fix another point to be called 1 (one), which defines a choice of "unit length". Then there should be exactly one real number for every point on this line, such that the number measures how far that point is from the point 0, assuming the point 1 is one unit away. Positive numbers correspond to points on the same side of 0 as 1, and negative numbers correspond to those points on the opposite side of 0 from the point 1.

Then how do we represent real numbers by symbols? And how do we add and multiply these numbers using those symbols? Possibly the best way is using decimals. A finite decimal is a finite sequence of the form a1a2a3....an.b1b2....bm, where each ai and each bj is one of the ten digits {0,1,2,3,....9}. A finite decimal corresponds to a point on the real line as follows. For example, 14.63 corresponds to the point constructed like this: first lay off 14 copies of the unit length, the first one ending at 1, the second one (called 2) being one unit on the opposite side of 1 from 0, and the third one (called 3) on the opposite side of 2 from 1, and so on, until we come to the 14th point (called 14). Then lay off another unit ending at 15. Then subdivide the interval between 14 and 15 into ten equal parts, with the end points of the 6th subinterval being called 14.6 and 14.7. Then subdivide that 6th subinterval again into ten equal parts and go out to the 3rd subinterval. The initial point of that subinterval is the point corresponding to 14.63. In this way one can assign to any finite decimal a point on the real line.
Not every point of the real line occurs as one of the points corresponding in this way to finite decimals, however. For instance the point (called 1/3) lying one third of the way between 0 and 1 does not correspond to a finite decimal. It lies to the right of all points corresponding to finite decimals of the form { .3, .33, .333, .3333, .33333, ..........}, but to the left of any point of the form { .4, .34, .334, .3334, .33334, ......}. However, since the points of the form { .3, .33, .333, .3333, .33333, ..........} get arbitrarily close to the point 1/3, any point to the left of 1/3 will lie to the left of one of the points { .3, .33, .333, .3333, .33333, ..........}. For example, if we take a point which is 1/1000 to the left of 1/3, then it will be to the left of the point .3333, which is within 1/10,000 of 1/3. Thus 1/3 is “the leftmost point which is not to the left of any finite decimal of the form { .3, .33, .333, .3333, .33333, ..........}”, i.e. 1/3 is the “smallest number not smaller than any of the numbers { .3, .33, .333, .3333, .33333, ..........}”; technically we say 1/3 is the “least upper bound (l.u.b.) of the numbers { .3, .33, .333, .3333, .33333, ..........}”. Although 1/3 does not equal any one of these finite decimals, this is a description of the point 1/3 in terms of the whole infinite sequence { .3, .33, .333, .3333, .33333, ..........} of finite decimals. It is usual to replace the infinite sequence { .3, .33, .333, .3333, .33333, ..........} of finite decimals simply by the one infinite decimal .3333333........ (3's continuing forever), sometimes denoted by .3 with a bar over the 3, where the bar indicates infinite repetition of that digit. In this way every point of the real line can be described by either a finite decimal or an infinite decimal. I.e.
given a point x on the line, to the right of 0 for example, to get the integer part of the decimal measure off copies of the unit interval starting at 0, until the next unit interval would go past the point x. If x lies strictly between the 5th and the 6th point, for instance, then the integer part of the decimal for x is 5. Then subdivide that interval again into ten equal parts and see whether x lies exactly on one of the subdivision points. If it does lie on, say, the 2nd subdivision point, then x corresponds to the finite decimal 5.2. If x does not lie on one of the subdivision points but lies between, say, the 2nd and the 3rd subdivision points, then the second decimal approximation to x is 5.2. Continue in this way to subdivide and approximate x by decimals. If eventually x lies exactly on some subdivision point then x corresponds to a finite decimal. If x never lies on any subdivision point, as was the case with 1/3, then x corresponds to an infinite decimal. Thus each point of the line can be represented by a finite or infinite decimal. We often call the finite ones infinite decimals also, where we assume they are made to look infinite by writing an infinite number of zeroes after they stop. This makes the language easier and we can just say "every point of the real line corresponds to an infinite decimal". (Not all infinite decimals can be obtained in this way from points on the line. Try to convince yourself that this procedure will never lead to an infinite decimal ending in all 9's repeating forever.) The other direction is harder, i.e. if we start with an infinite decimal, does it always correspond to a point of the real line? We could try to find the point, starting from the decimal, as follows. If we have a finite decimal like 3.7 there is no problem; it is easy to find the corresponding point. Just go out to the fourth unit interval after 0, between the points 3 and 4, subdivide into ten equal parts and take the 7th subdivision point to be 3.7.
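The repeated tenfold subdivision described above is long division in disguise; a minimal Python sketch for rational points (the function name `decimal_digits` is mine):

```python
from fractions import Fraction

def decimal_digits(x, n):
    """First n digits after the point of x in [0, 1), found by the subdivision
    procedure: rescale by 10, read off which subinterval the point falls in,
    then repeat inside that subinterval."""
    digits = []
    for _ in range(n):
        x *= 10            # each subinterval becomes a unit interval
        d = int(x)         # index of the subinterval containing x
        digits.append(d)
        x -= d             # keep only the position inside that subinterval
    return digits

print(decimal_digits(Fraction(1, 3), 6))   # the 3s never terminate: [3, 3, 3, 3, 3, 3]
```

For 1/3 the remainder after each step is again 1/3 of a subinterval, so the digit 3 recurs forever, exactly as the post says.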
But if the decimal is infinite, it is not so obvious. Say we have the decimal D = .12122122212222......... Does this correspond to a point x? Well, first we subdivide the interval between 0 and 1 into ten equal parts and consider the first subdivision point, called .1. Then we know x lies to the right of .1. Then we subdivide again and take the 2nd subdivision point in the subinterval, the point .12, and we know x lies to the right of that point. Continuing in this way we find an infinite number of points (if we live long enough, otherwise we must imagine it) and we know the point corresponding to x should lie to the right of all of them. But it should also be the closest point which is to the right of all of them. So we describe the point x corresponding to an infinite decimal D as “the leftmost point which is to the right of all points corresponding to finite decimal approximations of D”, i.e. x is the lub of all finite decimal approximations to D. But how do we know there is such a point? We do not. But it seems plausible at least if the real line is truly supposed not to have any holes in it, so we take this as an axiom, or unproved fact about the real line. This is called the “least upper bound axiom”: for every infinite decimal, the sequence of finite decimal approximations has a least upper bound on the real line. Stated as a fact about real numbers, it is usual to assume it in the following more general form. Least upper bound axiom: “If a set of real numbers is non-empty and has an upper bound, then it has a least upper bound”. This concept can be used to describe many familiar numbers and solutions to many problems. Examples: (i) (assuming we know how to find the length of line segments and hence the perimeter of a polygon), the number <pi> can be described as the lub of the perimeters of all polygons inscribed in the unit semicircle. I.e.
if you inscribe any polygon in the unit semicircle, the perimeter of that polygon will not be greater than <pi>, but if you take a polygon with small enough sides, its perimeter will be as close as you like to the number <pi>; i.e. <pi> is the smallest number not smaller than any of those perimeters. But how can we calculate this number, i.e. how can we find some of its finite decimal approximations? (ii) If we know how to find the area of a triangle and hence of a polygon, we can define the area of a circle as the lub of the areas of all inscribed polygons. But how can we show that this area is actually equal to <pi>r^2, where <pi> is defined above and r is the radius of the circle? (iii) If we want to know what is meant by the value of an infinite sum like 1 + 1/2 + 1/4 + 1/8 + 1/16 + ........, we can say it is the lub of all the finite “partial” sums { 1, 1 + 1/2, 1 + 1/2 + 1/4, 1 + 1/2 + 1/4 + 1/8, .......}. But how can we actually calculate this sum, i.e. can we find this least upper bound? (iv) If we want to find the slope of the parabola y = x^2 at the point (1,1), we can say it is the lub of the slopes of all the secant lines drawn through points of the form (x,x^2) and (1,1) where x < 1. But can we actually calculate this slope? (v) If we want to describe the “square root of 2” we can say it is the lub of all finite decimals whose square is less than 2. (Since the square of a finite decimal is never 2, as you can easily check, the square root of 2 is going to be an infinite decimal, and it is not so easy even to tell how to square an infinite decimal. In fact the only way we have to do that is to say that the square of an infinite decimal is the lub of the squares of all its finite decimal approximations!) Can we compute, or at least approximate, this infinite decimal? (vi) The cosine function, in radians, is defined as follows: given a positive real number t, measure off an arc of length t along the unit circle, starting at (1,0) and going counterclockwise.
Then the x coordinate of the point reached is cos(t), and the y coordinate is sin(t). But can we actually calculate, say, cos(1)? All these problems have answers provided by calculus. For example, cos(t) is given by the infinite formula cos(t) = 1 - t^2/2! + t^4/4! - t^6/6! + ..., where n! = “n factorial” = (1)(2)(3)....(n) is the product of the numbers between 1 and n. cos(t) can be computed to any desired degree of accuracy by taking enough terms of this formula. For example, cos(1) is the least upper bound of the sequence of approximations {1 - 1/2, 1 - 1/2 + 1/24 - 1/720, ........} formed as above by taking finite partial sums ending in a negative term.

Actually computing answers to problems: It is one thing to describe the answer to a problem as a lub of some set of numbers, but it is usually more desirable to actually find the answer in a nice simple form, or at least approximate it as well as we want. This is often not so easy, and may depend on the problem at hand. Thus there are two parts to solving most problems: 1) Describe the solution in precise terms, even if abstract ones. 2) Actually calculate that answer, say as a decimal, or at least show how to find as good a finite decimal approximation as we want. Sometimes we calculate the answer in terms of some other “known” number, such as when we say the area of a circle is <pi>r^2, even if we may not know exactly how to calculate <pi>. Even step 1) above has two parts: 1a) decide whether the problem has a solution, and if so, 1b) describe it. For example, if the solution of a problem is defined as the lub of some set of real numbers, to show it exists all we have to do, by the lub axiom, is prove the set is non-empty and has some upper bound. For example, to prove the infinite sum 1 + 1/2 + 1/4 + 1/8 + ....... has a finite value, described as the lub of all the finite sums {1, 1 + 1/2, 1 + 1/2 + 1/4, ....}, we must show there is an upper bound to these finite sums.
But it is not hard to see these finite sums are never greater than 2, so 2 is an upper bound. Then the axiom tells us there is a least upper bound, which in fact turns out also to be 2. The finite partial sums of the sequence 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ....... are bounded above by 1, hence have a least upper bound. WAIT!! OOOOPS! The sum of this sequence is not the lub of all those finite partial sums, since the minus signs cause the finite sums to go back and forth on both sides of the actual infinite sum. (Now is when we need the more general notion of “limit” instead of lub.) Anyway we can finesse this and say (correctly) that the value of the infinite sum 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ....... is the lub of the finite partial sums {1 - 1/3, 1 - 1/3 + 1/5 - 1/7, 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11, .........}. I.e. if we are careful to always take partial sums which end in a negative term, then they are actually smaller than the infinite sum we are trying to define. Thus we can say that 1 is an upper bound for THESE finite sums, so there is a lub. But what is the lub? It turns out to be <pi>/4, rather amazing. In the case of the infinite sum 1 + 1/4 + 1/9 + 1/16 + 1/25 + ......, where the nth denominator is the square of the integer n, it is not even so easy to find any upper bound at all (until you know how to compute area formulas by integral calculus). The least upper bound of these finite sums turns out to be <pi>^2/6, incredibly. Not only that, Leonhard Euler worked this out back in the 1730s, long before the modern theory of convergence existed!! Euler also knew how to evaluate the sum 1 + 1/16 + 1/81 + 1/256 + ....., where the nth denominator is the 4th power of the integer n, namely <pi>^4/90, and he knew many more such even power sums and included them as essential material in his famous “PRECALCULUS” book! However, I do not believe that even today anyone knows the value of 1 + 1/8 + 1/27 + 1/64 + ..... where the nth denominator is the cube (or any other odd power) of n. I.e.
these finite sums have an upper bound, but no one knows the least upper bound. Differential calculus is about how to: 1) describe the answer to the slope problem for the graph of a function in terms of "limits", and 2) actually calculate these limits, so as to compute the slope of y = f(x) at least as well as we know how to compute f(x) itself. Thus for a nice easy function like a polynomial f(x) = 3x^2 - 6x + 9, we should be able to calculate the slope also as a polynomial. But for a trigonometric function like f(x) = cos(x) we will only be able to calculate the slope function as another trigonometric function. (In a later math course, when we know the infinite formula given above for cosine, we will also get an infinite formula for the slope of the graph of cosine.) For a more difficult function like 2^x, or log2(x) (the logarithm “base 2” of x), the derivative will also be a challenge. You have probably heard of "natural logarithms", or logarithms to the base "e". We will define this magic number "e" as the unique base such that the slope of the graph of y = e^x at the point (0,1) equals 1. But then what is the number e? Calculus can be used to give a very simple formula for the function e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ......., and this can be used to approximate e very well, by plugging in x = 1 and adding up a few terms. It turns out e is between 2.71828 and 2.71829. Rather than continuing to restrict ourselves to the concept of least upper bounds, it is more useful to use the concept of “limits”. These are harder to define precisely, and harder to prove the existence of, but easier to deal with intuitively. Thus in practice we will find it convenient to use this concept, since there are some good methods for actually computing these “limits”, using the notion of a “continuous function”. This is our next topic of study.
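The series for e^x quoted above is easy to try at x = 1; a small sketch (just an illustration of the partial sums, not a rigorous error analysis):

```python
import math

# e^1 = 1 + 1 + 1/2! + 1/3! + ...; twenty terms already pin e down far
# beyond the 2.71828 ... 2.71829 bracket quoted above.
term, s = 1.0, 0.0
for n in range(20):
    s += term
    term /= n + 1          # turn x^n/n! into x^(n+1)/(n+1)! at x = 1
print(s)
```

The factorials in the denominators grow so fast that each new term adds far less than the one before, which is why so few terms suffice.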
For example, if we approximate the tangent line to y = x^2 at (1,1) by the secant line through the points (1,1) and (x,x^2), where x < 1, we can describe the slope of the tangent line as the lub of the slopes of all these secant lines, i.e. the lub of all numbers of the form (x^2-1)/(x-1) where x < 1. Simplifying the fraction gives x+1, and if x ranges over the numbers < 1, the smallest number not smaller than any of the numbers x+1 is 2.
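The secant-slope computation in that last paragraph can be watched numerically; a minimal sketch:

```python
# Secant to y = x^2 through (1, 1) and (x, x^2): slope (x^2 - 1)/(x - 1) = x + 1,
# which increases toward 2 as x approaches 1 from below, but never reaches it.
for x in (0.9, 0.99, 0.999):
    slope = (x**2 - 1) / (x - 1)
    print(x, slope)
```

Each secant slope falls short of 2, and 2 is the least number none of them exceed: the same lub pattern as with 0.999... and 1.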

0.999... equals one, but only theoretically. In real life, you could never have an infinite number of 9's (so you could never get 0.999... to equal one), but theoretically, you could. /thread

That's how I see it. In reality, you have one object, then another. They aren't the same, thus..they aren't. It's like holding an apple and looking at an apple tree and saying they are the same thing. THEORETICALLY they are...an apple has seeds in it which creates a tree which creates apples etc etc etc.... but they aren't the same in reality. I don't know, I see how .999 = 1 on paper, but looking at the two, they aren't the same....

looking at a football, you can call it a football, or a leather sphere. Both are the same. One object can have two distinct names. Like a wooden box and a crate.

Here's the deal: The limit of 1 - 1/10^n as n goes to infinity is 1. In other words, when n = 1 we have 0.9, when n = 2 we have 0.99 and so on. You might deduce that when n = infinity, 0.9999999999.... = 1. That's incorrect. "As n goes to infinity" does not mean "when n reaches infinity," because that's impossible; you cannot reach infinity. n cannot be infinity, for infinity is not a number. I hope this clears things up once and for all. No, 0.9 with the 9 repeating DOES NOT equal 1, but the limit of that sequence as the number of 9s reaches infinity is 1. If you could somehow reach infinity then yes, it is one, but that's an absurdity so do yourself a favor and come back to the real world.
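Whatever side of that argument you take, the limit statement itself is concrete; a small sketch of "the shortfall drops below any tolerance":

```python
from fractions import Fraction

# a_n = 1 - 1/10^n falls short of 1 at every finite n, but the shortfall 1/10^n
# drops below any tolerance you name; that is all "the limit is 1" claims.
tolerance = Fraction(1, 10**6)
n = 1
while Fraction(1, 10**n) >= tolerance:   # shortfall of a_n from 1
    n += 1
print(n)   # first n whose shortfall is below 10^-6
```

Pick a smaller tolerance and the loop simply runs a little longer; no tolerance, however tiny, survives forever, which is the precise content of the limit claim both sides are circling.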