What does notation mean in math?

What does this mean exactly?

This is such a simple explanation; I wish maths authors could explain things in layman's terms for students like myself.

Thank you so much; you are an incredibly generous person for helping me with something which is probably trivial to you! Everything you have said has improved my understanding, and I thank you for that.

I have only one other question. I have never seen such an intuitive definition of these terms and notations in any textbook so far, although I haven't read that many either.

Is this something you developed and learned yourself, or something that you came across in your readings? These things are usually explained very early in the text. No, I did not develop this idea myself, nor did I find it in my readings.

She made a notation in the margin of the book. Notation is the use of signs or symbols to represent numbers, words, phrases, or even complete concepts in fields such as language, mathematics, chemistry, and music (see also binary notation and hexadecimal notation). It also describes how a system of numbers, phrases, words, or quantities is written or expressed.

Positional notation refers to the location and value of digits in a numbering system, such as the decimal or binary system. It works like this: each digit is weighted by a power of the base according to its position, so in base ten, for example, 347 stands for 3×100 + 4×10 + 7×1.

It turns out that there are actually very few tweaks that one has to make to the core of mathematical notation to make it unambiguous.

Of course, to make it really nice, there are lots of details that have to be right. One has to actually be able to type things in an efficient and easy-to-remember way.

We thought very hard about that. And we came up with some rather nice general schemes for it. One of them has to do with entering things like powers as superscripts. Well, having a clean set of principles like that is crucial to making this whole kind of thing work in practice.
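The keystrokes themselves are not spelled out in the text above, but as a rough sketch of the kind of scheme meant here (this reflects how current Mathematica notebooks document superscript entry, not anything reproduced from the talk):

    (* In a notebook, Ctrl+^ after x starts a superscript and Ctrl+Space moves   *)
    (* back out of it, so x^2 + 1 can be typed directly in two-dimensional form; *)
    (* either way it is the same ordinary expression underneath:                 *)
    FullForm[x^2 + 1]     (* -> Plus[1, Power[x, 2]] *)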

And it does work in practice. The point is that an expression entered this way is completely understandable to Mathematica, so you can evaluate it. And the thing that comes out is the same kind of object as the input, and you can edit it, pick it apart, use its pieces as input, and so on. So instead of just having things like prefix operators, we also have things like overfix operators, and so on.
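The particular expression from the demo is not reproduced in the text, but the point about output being the same kind of editable, decomposable object as input can be sketched with any small computation (this example is mine, not the talk's):

    expr = Expand[(1 + x)^3]     (* -> 1 + 3 x + 3 x^2 + x^3                           *)
    expr[[2]]                    (* pick the result apart: its second term, 3 x        *)
    expr /. x -> expr            (* and feed its pieces, or all of it, back in as input *)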

And it certainly has all the various compactifying and structuring features of ordinary math notation. And the important thing is that nobody who knows ordinary math notation would be at all confused about what the expression means. There are a few places where it differs from the traditional conventions, like the way trig functions are written, and so on. But I would argue rather strongly that the Mathematica StandardForm, as we call it, is a better and clearer version of this expression.

But if one wants to be fully compatible with traditional textbooks, one needs something different. That is what TraditionalForm is for. And the actual TraditionalForm I get always contains enough internal information that it can unambiguously be turned back into StandardForm.

But the TraditionalForm looks just like traditional math notation. With all the slightly crazy things that are in traditional math notation, like writing sin squared x, instead of sin x squared, and so on.
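As a small concrete illustration (mine, not a transcript of the demo): converting an expression to TraditionalForm reproduces those traditional conventions, including the sin-squared one just mentioned, while the underlying expression stays the same.

    Sin[x]^2 + Log[x] // TraditionalForm
    (* in a notebook this displays roughly as sin^2(x) + log(x), following the *)
    (* traditional conventions, while internally it remains Sin[x]^2 + Log[x]  *)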

You may notice those jaws on the right-hand side of the cell. We can edit just fine. Actually, we have a few hundred rules that are heuristics for understanding TraditionalForm expressions. And they work fairly well. Sufficiently well, in fact, that one can really go through large volumes of legacy math notation—say specified in TeX—and expect to convert it automatically to unambiguously meaningful Mathematica input. With plain text there is no comparable notion of the input having a definite, computable meaning. But with math there is.
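For instance, a fragment of TeX can be turned into an evaluatable Mathematica expression. This minimal sketch uses the documented ToExpression route with TeXForm, rather than whatever conversion pipeline the talk itself demonstrated:

    ToExpression["\\frac{a+b}{c^2}", TeXForm]     (* -> (a + b)/c^2 *)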

Of course, there are some things with math, particularly on the output side, that are a lot trickier than text. Part of the issue is that with math one can expect to generate things automatically: you do a computation, and out comes a huge expression. So then you have to do things like figure out how to break the expression into lines elegantly, which is something we did a lot of work on in Mathematica. And because that line breaking also has to happen as you edit, there are nasty problems, like the fact that you can be typing more characters and suddenly your cursor jumps backwards.

Well, that particular problem I think we solved in a particularly neat way. Did you see that? There was a funny blob that appeared just for a moment when the cursor had to move backwards. Perhaps you noticed the blob. Physiologically, I think it works by using nerve impulses that end up not in the ordinary visual cortex, but directly in the brain stem where eye motion is controlled.

So it works by making you subconsciously move your eyes to the right place.

Does that mean we should turn everything Mathematica can do into math-like notation? Should we have special characters for all the various operations in Mathematica?

We could certainly make very compact notation that way. But would it be sensible? Would it be readable? One could have no special notation. Then one has Mathematica FullForm. But that gets pretty tiresome to read, as the small example below shows. The other possibility is that everything could have a special notation. Well, then one has something like APL—or parts of mathematical logic.
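Here is the kind of thing meant by the first extreme (a small example of my own): with no special notation at all, even simple expressions become nests of explicit function heads.

    FullForm[1 + x/y^2]       (* -> Plus[1, Times[x, Power[y, -2]]]                *)
    FullForm[1/2 + Sqrt[x]]   (* -> Plus[Rational[1, 2], Power[x, Rational[1, 2]]] *)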

Think about Unix. In early versions of Unix it seemed really nice that there were just a few quick-to-type commands. But then the system started getting bigger. And after a while there were zillions of few-letter commands. And the whole thing started looking completely incomprehensible.

People can handle a modest number of special forms and special characters. Maybe a few tens of them. But not more. And if you try to give them more, particularly all at once, they just become confused and put off.

Well, one has to qualify that a bit. There are, for example, lots of relational operators. And, of course, it is in principle possible for people to learn lots and lots of different characters.

Because languages like Chinese and Japanese have thousands of ideograms. But it takes people many extra years of school to learn to read those languages, compared to ones that just use alphabets. Well, Cantor introduced a Hebrew aleph for his infinite cardinal numbers. But there are no other characters that have really gotten imported from other languages. Well, I was curious what the frequency distribution of letters in math is actually like. So I had a look in MathWorld, which is a large website of mathematical information with about 10,000 entries, and looked at what the distribution of different letters was.

We can see which lowercase letters are the most common. But what notation is good to use? Most people who actually use math notation have some feeling for that. In StandardForm it matters less, because anything you type will be unambiguously understandable. But for TraditionalForm, it would be good to have some principles.
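A survey of that general sort could be sketched in a few lines. This is a hypothetical reconstruction, assuming a list of formula strings called formulas; it is not the code actually used on MathWorld:

    (* tally each letter appearing in the formulas and sort by decreasing frequency *)
    letterCounts = ReverseSortBy[
        Tally[Select[Characters[StringJoin[formulas]], LetterQ]], Last];
    Take[letterCounts, 10]     (* the ten most frequently used letters *)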

Perhaps to finish off, let me talk a little about the future of mathematical notation. Well, the most obvious possibility is notation for representing programs as well as mathematical operations. In Mathematica, for instance, there are quite a few textual operators that are used in programs. And with the right drawing of characters, quite a few of these could be made to fit in perfectly with other mathematical characters.

Because we picked the ASCII characters well, one can often get special characters that are visually very similar but more elegant. And what makes all this work is that the parser for Mathematica can accept both the special character and non-special character forms of these kinds of operators.
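For example (a small illustration of my own, not the demo from the talk), each of the plain ASCII operator forms below also has a special-character equivalent, such as the \[Rule] arrow for ->, and both forms parse to exactly the same expression:

    f /@ {1, 2, 3}        (* Map written with its textual operator; -> {f[1], f[2], f[3]}   *)
    x /. x -> 2           (* ReplaceAll; the -> can equally be entered as the \[Rule] arrow  *)
    2 != 3                (* -> True; != can equally be entered as the \[NotEqual] character *)
    #^2 & /@ {1, 2, 3}    (* a pure function; # marks where the argument goes; -> {1, 4, 9}  *)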

Notice the number sign or pound sign—or is it called an octothorpe?—that we use for places where parameters go in a pure function. How far can one go in that direction, making visual or iconic representations of things? It clearly works at least up to a point. But how far can that go? You see, I think one is running into some fundamental limitations in human linguistic processing.

When languages are more or less context-free—more or less structured like trees—one can do pretty well with them. Our buffer memory of five chunks, or whatever, seems to do well at allowing us to parse them. Of course, if we have too many subsidiary clauses, even in a context-free language, we tend to run out of stack space and get confused.

But what about networks? Can we understand arbitrary networks? I mean, why do we have to have operators that are just prefix, or infix, or overfix, or whatever? Why not operators that get their arguments by just pulling them in over arcs in some arbitrary network? And one question is what notation might be used to think abstractly about those kinds of things. In geometry we know how to say things with diagrams. And a little more than a hundred years ago, it became clear how to formulate geometrical questions in algebraic terms.

And my guess is that of all the math-like stuff out there, only a comparatively small fraction can actually be represented well with language-like notation. But we as humans really only grok this language-like notation easily. So the things that can be represented that way are the things we tend to study.

Of course, those may not be the things that happen to be relevant in nature and the universe.

In the discussion after the talk, and in interactions with people at the conference, a few additional points came up.

Empirical laws for mathematical notations

I have been curious whether empirical historical laws can be found for mathematical notation.

Dana Scott suggested one possibility: a trend towards the removal of explicit parameters. As one example, in the 1800s it was still typical for each component of a vector to be a separately named variable. But then components started getting labelled with subscripts, as in a_i. And soon thereafter—particularly through the work of Gibbs—vectors began to be treated as single objects, denoted say by a boldface a or by a with an arrow over it.

With tensors things are not so straightforward: an index-free notation exists, but in physics it is still often considered excessively abstract, and explicit subscripts are used instead.

With functions, there have also been some trends to reduce the mention of explicit parameters. In pure mathematics, when functions are viewed as mappings, they are often referred to just by function names like f, without explicitly mentioning any parameters. But this tends to work well only when functions have just one parameter.

With more than one parameter it is usually not clear how the flow of data associated with each parameter works. However, as early as the 1920s, it was pointed out that one could use so-called combinators to specify such data flow, without ever explicitly having to name parameters. Combinators have not been used in mainstream mathematics, but at various times they have been somewhat popular in the theory of computation, although their popularity has been reduced through being largely incompatible with the idea of data types.

Combinators are particularly easy to set up in Mathematica—essentially by building functions with composite heads.

If one defines the integer n—effectively in unary—by Nest[s[s[k[s]][k]], k[s[k][k]], n], then addition is s[k[s]][s[k[s[k[s]]]][s[k[k]]]], multiplication is s[k[s]][k], and power is s[k[s[s[k][k]]]][k]. No variables are required. The problem is that the actual expressions one gets are almost irreducibly obscure.
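The underlying rewrite rules are not included in the text, but a minimal sketch of how such combinators can be set up (using the lowercase names s and k from the passage above; the check at the end is my own) is:

    (* the standard S and K reduction rules, written as Mathematica definitions *)
    s[x_][y_][z_] := x[z][y[z]]
    k[x_][y_] := x

    (* the unary integer n and the addition combinator, exactly as given above *)
    num[n_] := Nest[s[s[k[s]][k]], k[s[k][k]], n]
    plus = s[k[s]][s[k[s[k[s]]]][s[k[k]]]];

    (* adding 2 and 3 and applying the result to symbolic f and x gives
       f applied five times, the same as the unary integer 5 *)
    plus[num[2]][num[3]][f][x] === num[5][f][x]     (* -> True *)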


