August 2020

I recently decided to introduce myself to the field of Modern Algebra - in particular, Galois theory - and I found it absolutely beautiful! Thus I would really like to study something in Galois theory, which leads me to ask - do people still develop Galois theory? What else is there to learn in the subject?

I am inspired by questions like these: What kind of work do modern day algebraists do? and What do modern-day analysts actually do? and would love to learn your opinions, stories, etc!

Thanks in advance!



from Hot Weekly Questions - Mathematics Stack Exchange
Gauss

Hello all,

Recently, I have been studying neural networks and I found a few interesting papers that used NNs to approximate solutions to PDEs. There are a few benefits and drawbacks to using NNs to do this as opposed to conventional numerical methods, but I think the idea itself is fascinating. Are there any other applications of NNs to mathematics that you know of? I would think that there would be a lot of potential for their use in numerical analysis and optimization and maybe in automated theorem proving, but from what I've seen, this is not so common. Also, are there any inherent constraints associated with using NNs in mathematics? I feel like the black-box nature of a lot of deep learning models may hinder the ability to find theoretical error bounds and conditions to guarantee convergence, which may explain why they aren't more prevalent in mathematics.
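As a concrete illustration of the PDE-approximation idea, here is a toy sketch of the usual recipe (shown for an ODE rather than a PDE to keep it short, and assuming PyTorch is available; none of this is tied to any particular paper): train a small network so that the equation's residual, sampled at random collocation points, is driven to zero alongside the boundary/initial condition.

```python
import torch

# Toy "physics-informed" example: fit u_theta on [0, 1] so that u' = u and u(0) = 1
# (true solution: exp(x)).  The same recipe extends to PDEs with more inputs.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)               # random collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = ((du - u) ** 2).mean()                         # equation residual u' - u = 0
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # initial condition u(0) = 1
    loss = residual + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
print(torch.cat([net(x_test), torch.exp(x_test)], dim=1))    # learned vs exact values
```

The appeal, as the papers mentioned above exploit, is that the same loss works for irregular domains and high dimensions; the drawback, as noted, is that error bounds and convergence guarantees are much harder to come by than for classical schemes.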

submitted by /u/Waelcome

from math https://ift.tt/34Q9PtB

In any regular calculus or real analysis course, we learn the definition of the derivative of a function $f(x)$ as $$f^\prime (x)=\lim\limits_{h\rightarrow 0} \frac{f(x+h)-f(x)}{h}$$ However, while studying abstract algebra we come to see that differentiation is just an operation like any other (addition, multiplication, etc.), except that it acts on functions. So I want to know whether there is a way to define an algebraic structure whose underlying set is the set of all differentiable functions, with differentiation as the operation.
And if it is possible to define differentiation in such a manner, how can it be connected to the analytic definition of the derivative?
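One common way to make this precise (a sketch of the standard algebraic framing, offered as a guess at what is being asked): the infinitely differentiable functions on an interval form a commutative $\mathbb{R}$-algebra, and differentiation $D$ is a derivation on it, i.e. an $\mathbb{R}$-linear map obeying the Leibniz rule:
$$D(f+g)=D(f)+D(g),\qquad D(\lambda f)=\lambda\,D(f),\qquad D(fg)=D(f)\,g+f\,D(g).$$
A ring equipped with such a map is called a differential ring; the analytic limit definition above then supplies one particular derivation on this ring (one restricts to smooth functions so that $D$ maps the set into itself).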



from Hot Weekly Questions - Mathematics Stack Exchange
Soham Sarkar

For all $n\geq 1$, prove that

$$\int_0^1 x^{2n-1}\ln(1+x)\,dx =\frac{H_{2n}-H_n}{2n}\tag1$$ I came across this identity here on YouTube, where it is proved in an elementary way.

Trying to find an alternative proof, we can observe that $$\ln(1+x)=\ln\left(\frac{1-x^2}{1-x}\right)=\ln(1-x^2)-\ln(1-x).$$ Multiplying by $x^{2n-1}$ and integrating from $0$ to $1$, we obtain first-order partial derivatives of the beta function.

How can we prove that identity $(1)$ without the use of derivatives of beta function?
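Not a proof, but the identity is easy to sanity-check numerically before hunting for an alternative derivation (a quick sketch; it assumes SciPy is available):

```python
from math import log
from scipy.integrate import quad

def harmonic(k):
    """k-th harmonic number H_k."""
    return sum(1 / j for j in range(1, k + 1))

# numerical sanity check of identity (1) for a few values of n
for n in (1, 2, 3, 5, 10):
    lhs = quad(lambda x: x ** (2 * n - 1) * log(1 + x), 0, 1)[0]
    rhs = (harmonic(2 * n) - harmonic(n)) / (2 * n)
    print(n, lhs, rhs)
```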



from Hot Weekly Questions - Mathematics Stack Exchange
Naren

I really, really don't like the use of the phrase "true but unprovable" in relation to Gödel's First Incompleteness Theorem, and I was interested to know if anyone has thoughts on this.

The crux of my thinking is this: I believe that when most mathematicians who aren't working in foundations use the word "true", they really mean "provable". If they're working in higher-order settings, they might mean "tautological" (i.e. true in all models under all interpretations), since higher-order logics don't have a completeness theorem that makes "tautological" and "provable" equivalent.

It seems that "true but unprovable" is used as a catchphrase to capture what Gödel I is all about, but I think that without pinning down precisely what is meant by "true", it's very easy to mislead: the Gödel sentence of a theory is not provable (that's the whole point); it is true in the intended model of arithmetic, but this can't be shown unless you switch to a more powerful system (which will have its own Gödel sentence).

The Gödel sentence of PA has the nice property of being "obviously true" in the intended model as an immediate consequence of Gödel I, but most logically independent sentences are nowhere near as clearcut: just look at Choice. Despite the importance of Gödel I, it seems to have had precisely zero impact on how most mathematicians work: we still like proofs, despite their inherent limitations.

Perhaps I'm splitting hairs, but I think that this is especially important now that there is more of a push (e.g. Buzzard) to start using proof assistants in mathematics: if we end up embracing formal proof in an axiom system as the gold standard, surely "true" is going to become completely synonymous with "provable", if it hasn't already?

submitted by /u/Course-6

from math https://ift.tt/2YQfyeR

In this weekly thread, we discuss essays from the joint AMS and MAA publication Living Proof: Stories of Resilience Along the Mathematical Journey. To quote the preface:

This project grew out of conversations with students about the difficulties inherent in the study of mathematics ... Math should be difficult, as should any worthwhile endeavor. But it should not be crippling. The ability to succeed in a mathematical program should not be hindered by a person’s gender, race, sexuality, upbringing, culture, socio-economic status, educational background, or any other attribute.

... As you read this, we hope that you will find some inspiration and common ground in these pages. We trust that there is at least one story here that you can connect with. For those stories that you cannot relate to, we hope that you will come to better appreciate the diversity of our mathematical community and the challenges that others have faced. We also hope that you will laugh with some of our authors as they recount some of the more absurd struggles they have faced. In the end, we hope that you are motivated to share your own stories as you learn more about the experiences of the people in your own mathematical lives.


We will read and discuss individual essays from Part II: Who Are These People? Do I Even Belong? The essays can be found here.

This week's essay starts on page 40 and is titled:

  • 13. Cold, Austere, or Queer, by Autumn Kent

Please take the time to read and reflect on this story, and feel free to share how it relates to your own experiences in the comments below!

submitted by /u/inherentlyawesome

from math https://ift.tt/3gIrEgj

This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on over the week/weekend. This can be anything from math-related arts and crafts, what you've been learning in class, books/papers you're reading, to preparing for a conference. All types and levels of mathematics are welcomed!

submitted by /u/AutoModerator

from math https://ift.tt/31JhnMJ

Dubbed the "unofficial poet laureate of Twitter", Brian Bilston (a pen name) sometimes uses mathematics to shape his poems; for example, this poem whose word-counts follow the Fibonacci numbers:

from "Brian Bilston's  POETRY LABOETRY"

Look for Brian on Twitter (@brian_bilston) 
and also on Facebook (www.facebook.com/BrianBilston/).
Links and comments for many of my blog-posts are also on Twitter @mathypoems
and a search using the hash-tag #NPRpoetry leads to lots of interesting stuff!



from Intersections -- Poetry with Mathematics
https://ift.tt/3ju7NDf
noreply@blogger.com (JoAnne Growney)

Mathematical proofs are written as sentences and not as collections of logic symbols.

With logical symbols, it is much easier for me to visualize what is being said than with English text full of grammar. This is my personal opinion; others may have different opinions.

I just asked a question on another website to find logical mistakes in my work, which is written entirely in the language of propositional logic.

Some people suggested writing it down in English sentences. Is there really any harm in writing proofs as collections of logic symbols?



from Hot Weekly Questions - Mathematics Stack Exchange
lorilori

I want to use elliptic curves modulo $n$ to find out whether $n$ is prime or not.

Now, I know that $n = 53467 = 127\times 421$, but how do I find this out using elliptic curves?

I tried Lenstra's factorization method, but how do I choose an efficient elliptic curve so as to get the factors quickly?
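For what it's worth, here is a bare-bones sketch of Lenstra's method (random curves, naive scalar multiplication, arbitrary trial and bound counts; it does not answer the curve-selection question, but shows the mechanism by which a failed modular inversion reveals a factor):

```python
import math
import random

def inv_mod(k, n):
    """Modular inverse of k mod n; raise, carrying the gcd, if it does not exist."""
    g = math.gcd(k % n, n)
    if g != 1:
        raise ZeroDivisionError(g)   # this gcd is often a nontrivial factor of n
    return pow(k % n, -1, n)

def ec_add(P, Q, a, n):
    """Add points on y^2 = x^3 + a*x + b over Z/nZ (b never appears explicitly)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                  # point at infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        s = (y2 - y1) * inv_mod(x2 - x1, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    """k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def lenstra(n, trials=100, bound=1000):
    """Try to return a nontrivial factor of n with Lenstra's elliptic-curve method."""
    for _ in range(trials):
        x, y, a = (random.randrange(n) for _ in range(3))
        b = (y * y - x ** 3 - a * x) % n          # forces (x, y) onto the curve
        g = math.gcd(4 * a ** 3 + 27 * b * b, n)
        if g == n:
            continue                              # singular curve, try another
        if g > 1:
            return g                              # lucky gcd with the discriminant
        P = (x, y)
        try:
            for k in range(2, bound):             # builds up (bound-1)! * P step by step
                P = ec_mul(k, P, a, n)
                if P is None:
                    break
        except ZeroDivisionError as e:
            g = e.args[0]
            if 1 < g < n:
                return g
    return None

print(lenstra(53467))   # typically prints 127 or 421
```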



from Hot Weekly Questions - Mathematics Stack Exchange
Shinimani

I am a software engineering student, and this year I learned how CPUs work. It turns out that electronic engineers (and, as I often see, people in my field too) use derivatives with discontinuous functions, for instance in order to calculate the optimal number of ripple adders so as to minimise the execution time of the addition process:

$$\text{ExecutionTime}(n, k) = \Delta(4k+\frac{2n}{k}-4)$$ $$\frac{d\,\text{ExecutionTime}(n, k)}{dk}=4\Delta-\frac{2n\Delta}{k^2}=0$$ $$k= \sqrt{\frac{n}{2}}$$

where $n$ is the number of bits in the numbers to add, $k$ is the number of adders in ripple and $\Delta$ is the "delta gate" (the time it takes a gate to operate).

Clearly the execution-time function is not continuous at all, because $k$ is a natural number and so is $n$. This is driving me crazy: on the one hand I understand that I can analyse the function as a continuous one and get results that way, and indeed I think that's what we do ("I think", which is why I am asking); but my intuition and knowledge of mathematical analysis tell me that this is completely wrong, because the function is not continuous and never will be, and because of that the derivative with respect to $k$ or $n$ does not exist, as there is no rate of change.
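A quick numerical aside on what the derivation above implicitly does: relax $k$ to a real variable, optimise, then compare the neighbouring integers. This is just a sketch; $n=64$ and $\Delta=1$ are arbitrary example values, not from the question.

```python
import math

def execution_time(n, k, delta=1.0):
    """Delay of an n-bit addition split across k ripple adders, in units of delta."""
    return delta * (4 * k + 2 * n / k - 4)

n = 64
k_real = math.sqrt(n / 2)                              # optimum of the relaxed problem
candidates = {max(1, math.floor(k_real)), math.ceil(k_real)}
k_best = min(candidates, key=lambda k: execution_time(n, k))
print(k_real, k_best, execution_time(n, k_best))
```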

If someone could explain to me whether my first guess is correct or not, and why, I'd appreciate it a lot. Thanks for reading and helping!



from Hot Weekly Questions - Mathematics Stack Exchange
Santiago Pardal

I am currently finishing up my degree in mathematics with the intent of pursuing logic (either model theory or set theory) in postgrad. I am taking 4 papers and am feeling overwhelmed, so I am considering dropping one so I can put more time and focus into the other 3. I don't like my multi-variable calculus paper, and it doesn't seem as relevant to my interests compared with the other papers, but I feel like it's something you just need to know as a mathematician. I have done calc 1 and 2, and this is a 2nd-year paper while my other 3 are 3rd-year papers. Any advice would be greatly appreciated.

Oh, and I also never got round to doing 2nd-year differential equations, so I will have to do it next trimester, but it would take the place of some cooler 3rd-year papers I would rather do. Applying the same question: could I skip it, or should I just grit my teeth and do it?

submitted by /u/daseinp

from math https://ift.tt/31HQj0f

I am taking a course in Discrete Mathematics. In the course we are using $\to$ for implication and have been discussing truth tables and the like. But something was said about this being the same as $\implies$. It seemed strange to me that if they are the same, why not just use one of the symbols. I dug around and found that there is a difference.

I know that in the day to day life of a mathematician, whatever difference there is, it isn't really much to worry about. But there is supposedly a difference. I know that there are a number of other questions/answers on this site that discuss this, but I am still a bit confused. Here is my current understanding. Please tell me if I am thinking about it the right way.

First my understanding:


A proposition is the same as a statement.

When $A$ and $B$ are propositions, then $A \to B$ is the proposition with the truth table that is only false when $A$ is true and $B$ is false.

When proving a theorem, something is assumed to be true. From this one makes arguments that lead to the conclusion. We then use $A \implies B$ to say that since we know $A$ is indeed true, then $B$ must also be true. So $\implies$ is not a strict logical symbol with a truth table. We only use this to say that something is true because of something else.

If I know that $x$ is equal to $1$ and I want to say that from this follows that $x^2 = 1$, then I would use $\implies$. So I may say "We know that $x=1 \implies x^2 = 1$".

So far so good.

Let's say I want to define a set. If I consider the two sets $$ A = \{x\in \mathbb{R}: x^2 =1 \to x\geq 0\} \\ B = \{x\in \mathbb{R}: x^2 =1 \implies x\geq 0\} $$

Here then $A = \mathbb{R}\setminus \{-1\}$ because for these numbers the proposition/statement $(x^2 =1 \to x\geq 0)$ is true.
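A tiny sanity check of that reading of $A$ (treating $\to$ as the material conditional and sampling only a few integers, since the full set is over $\mathbb{R}$):

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

sample = [-2, -1, 0, 1, 2]
A = [x for x in sample if implies(x * x == 1, x >= 0)]
print(A)   # [-2, 0, 1, 2] -- only x = -1 falsifies the conditional
```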

And $\implies$ in $B$ doesn't make sense because I am not asserting anything. For the same reason, if I state the theorem "for all real numbers $x$, $x^2 = 1 \implies x = 1$", then this is an incorrect theorem.

If I make the definition saying that a real number $x$ is foo if $x^2 = 1 \implies x =1$, then the only number that is foo is $1$.

Is all this correct?


I understand that mathematicians use $\implies$ when maybe they "should" use $\to$ and this doesn't bother me. I am just trying to understand.

(You should have a "did-I-understand-this-correctly" tag.)



from Hot Weekly Questions - Mathematics Stack Exchange
John Doe

Let $x, y$ be positive integers. I want to know when $3 x^2 + 2 x = y^2$ has a solution.

Through some enumeration of all $x$, and trial and error, I have found the following recursion which appears to include all the solutions:

Initial conditions are:

$$\begin{array}{l} x_0 = 0, x_1 = 2\\ y_0 = 0, y_1 = 4 \end{array}$$

Recursion is:

$$\begin{array}{l} x_n = 8 y_{n - 1} + x_{n - 2}\\ y_n = 14 y_{n - 1} - y_{n - 2} \end{array}$$

This appears to be similar to Pell's equation, and here it seems that $x / y$ is some continued fraction approximation to $1 / \sqrt{3}$.

I'm not quite sure how to find all solutions mathematically, though, or how to see that this recursion indeed produces all of them.
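Not a proof, but a brute-force search is a useful cross-check of the conjectured recursion (a sketch; the search bound is arbitrary):

```python
import math

def solutions(limit):
    """All positive solutions of 3x^2 + 2x = y^2 with x <= limit, by brute force."""
    out = []
    for x in range(1, limit + 1):
        y2 = 3 * x * x + 2 * x
        y = math.isqrt(y2)
        if y * y == y2:
            out.append((x, y))
    return out

print(solutions(100_000))   # (2, 4), (32, 56), (450, 780), (6272, 10864), (87362, 151316)

# the conjectured recursion reproduces exactly these pairs
xs, ys = [0, 2], [0, 4]
for _ in range(4):
    xs.append(8 * ys[-1] + xs[-2])
    ys.append(14 * ys[-1] - ys[-2])
print(list(zip(xs, ys))[1:])
```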



from Hot Weekly Questions - Mathematics Stack Exchange
simonzack

Is there a cubic $Q(x)\in \mathbb{Z}[x]$ so that $|Q(p_1)|=|Q(p_2)|=|Q(p_3)|=|Q(p_4)|=3$, where $p_1, p_2, p_3, p_4$ are distinct primes?

Clearly there must be at least one $Q(p_i)=3$ and at least one $Q(p_j)=-3$ (otherwise either $Q(x)-3$ or $Q(x)+3$ would have $4$ roots, which is impossible for a third-degree polynomial).

Let's suppose that $Q(p_1) = 3$ and $Q(p_2) = -3$.

Then $\frac{Q(p_1) - Q(p_2)}{p_1-p_2} = n$ for some $n \in \mathbb{Z}$.

The divisors of $6$ are $1, 2, 3, 6$, so $(p_1-p_2) \in \{1, 2, 3, 6\}$.

That’s what I’ve got so far.



from Hot Weekly Questions - Mathematics Stack Exchange
Foorgy Infifcio

A magnium is a set $M$ with a binary operation $\cdot$ satisfying:

  1. $|M| \ge 2$
  2. For all $a$, $b$, $c$ $\in M$, $(a \cdot b) \cdot c = a \cdot (b \cdot c)$.
  3. For all $a$, $b$ $\in M$ with $a \ne b$, exactly one of the equations $a \cdot x = b$ and $b \cdot x = a$ has a solution for $x$ in $M$.
  4. For all $a$, $b$ $\in M$, the equation $a \cdot x = b$ has a solution for $x$ in $M$ if and only if the equation $y \cdot a = b$ has a solution for $y$ in $M$.

Examples of magniums are the positive real numbers and the non-negative integers under addition. Another example is the set $\{1, 2, 3, ..., 120\}$ under the operation $x \cdot y = \min\{x + y, 120\}$, which shows that magniums generally do not have the cancellation property.

So the question is: is there a non-commutative magnium? Currently I'm trying to think of some two-variable function $f(x, y)$ on $\Bbb{R}$ satisfying $f(x, y) \ge \max\{x, y\}$ that's associative but not commutative, and I'm not coming up with anything good.
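Not an answer, but the axioms are cheap to machine-check on small candidate examples, which is handy when experimenting (a sketch; here the $\min(x+y,120)$ example is scaled down to $N=10$):

```python
from itertools import product

N = 10
M = range(1, N + 1)
op = lambda x, y: min(x + y, N)          # scaled-down version of the min(x+y, 120) example

def solvable(a, b):
    """Is there an x in M with a.x = b?"""
    return any(op(a, x) == b for x in M)

assoc = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(M, repeat=3))
ax3 = all(solvable(a, b) != solvable(b, a) for a, b in product(M, repeat=2) if a != b)
ax4 = all(solvable(a, b) == any(op(y, a) == b for y in M) for a, b in product(M, repeat=2))
print(assoc, ax3, ax4)                    # all True for this commutative example
```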



from Hot Weekly Questions - Mathematics Stack Exchange
Perry Ainsworth

In the last week of August, I attended for the first time a virtual conference. This was the 2020 Ural Workshop on Group Theory and Combinatorics, organised by Natalia Maslova at the Ural Federal University in Yekaterinburg and her colleagues. The conference was held as a Zoom meeting, and ran with only one hitch. As fate would have it, it was Natalia’s talk that was disrupted by a technological failure, so she started ten minutes late and had to talk fast. My co-author and St Andrews student Liam Stott was talking in the other parallel session immediately afterwards, so I switched as quickly as I could, only to find that the chair of that session had started him early (I assume the previous speaker hadn’t shown up), and he was three-quarters of the way through his talk already. Fortunately I knew what he was talking about!

Yekaterinburg is four hours ahead of St Andrews, so we had a week of very early rising; we had lunch at 10am, and were finished for the day (in both senses) by 2pm most days.

There were some very enjoyable talks, and as usual I can only mention a few. Cheryl Praeger talked about totally 2-closed finite groups. A permutation group G is 2-closed if every permutation preserving all the G-orbits on ordered pairs belongs to G. Cheryl and her colleagues call an abstract group 2-closed if every faithful transitive permutation representation of it is 2-closed. These groups were first studied by Abdollahi and Arezoomand, who found all nilpotent examples; with Tracey they subsequently found all soluble examples. Now this team augmented by Cheryl has considered insoluble groups. At first they found none, but they found that in fact six of the 26 sporadic simple groups (the first, third and fourth Janko groups, Lyons group, Thompson group, and Monster) are totally 2-closed. Work continues.

We had a couple of plenary talks about axial algebras; Sergey Shpectorov and Alexey Staroletov explained what these things (generalised from the Griess algebra for the Monster) are, and what the current status of their study is.

Greenberg’s Theorem states that any finite or countable group can be realised as the automorphism group of a Riemann surface, compact if and only if the group is finite. Gareth Jones talked about this. The proof, he says, is very complicated. He gave a new and much simpler proof; it did less than Greenberg’s Theorem in that it only works for finitely generated groups, but more in that the Riemann surface constructed is a complex algebraic curve over an algebraic number field.

Misha Volkov gave a beautiful talk about synchronizing automata. He began with the basic stuff around the Černý conjecture, which I have discussed before, but added a couple of things which were new to me: a YouTube video of a finite automaton taking randomly oriented plastic bottles on a conveyor belt in a factory and turning them upright; and the historical fact that the polynomial-time algorithm for testing synchronization was in the PhD thesis of Chinese mathematician Chung Laung Liu (also transliterated as Jiong Lang Liu), two years before the Černý conjecture was announced. Then he turned to new results, and showed that, with only tiny changes (allowing the automaton to have no transition for some state-symbol pair, or restricting the inputs from arbitrary words to words in a regular language) the synchronization problem can jump up from polynomial to PSPACE-complete!

Alexander Perepechko gave a remarkable talk, connecting the Thompson group T, the Farey series, automorphism groups of some affine algebraic surfaces, and Markov triples, solutions in natural numbers to the Diophantine equation $x^2+y^2+z^2 = 3xyz$. (There is a long-standing conjecture that a natural number occurs at most once as the greatest element in some such triple. The sequence of such numbers begins 1, 2, 5, 13, 29, 34, 89, …. I will not attempt to explain further.)

Rosemary became the fourth author of the “diagonal structures” quartet to talk about that work, which I discussed here. She concentrated on the heart of the proof, the first place in the work where the remarkable appearance of algebraic structure (a group) from combinatorial (a Latin cube with a mild extra hypothesis) appears. Without actually describing how the hard proof goes, she explained the context and ideas clearly. I think this ranks among my best work; and all I did, apart from the induction proof of which Latin cubes form the base, was to insist to my co-authors that a result like this might just be possible, and we should go after it.

One of my early heroes in group theory was Helmut Wielandt; his book on permutation groups was my first reading as a graduate student. Danila Revin gave us a Wielandt-inspired talk. Wielandt had asked, in Tübingen lectures in the winter of 1963-4, about maximal X-subgroups of a group G, where X is a complete class of finite groups (closed under subgroups, quotients and extensions). Let $k_X(G)$ be the number of conjugacy classes of maximal X-subgroups of G. Wielandt said that the reduction X-theorem holds for the pair (G,N) if $k_X(G/N) = k_X(G)$, and holds for a group A if it holds for (G,N) whenever G/N is isomorphic to A. Wielandt asked for all A, and then all pairs (G,N), for which this is true; this is the problem which Danila and his co-authors have now solved.

(I hope Danila will forgive me an anecdote here. At an Oberwolfach meeting in the 1970s, one of the speakers told us a theorem which took more than a page to state. Wielandt remarked that you shouldn’t prove theorems that take more than a page to state. Yet the solution to his own problem took nearly ten pages to state. I think this is inevitable, and simply teaches us that finite group theory is more complicated than we might have expected, and certainly more complicated than Michael Atiyah expected. Indeed, in the very next talk, Chris Parker told us about work he and his colleagues have done on subgroups analogous to minimal parabolic subgroups in arbitrary groups. This is intended as a contribution to revising the Classification of Finite Simple Groups, and they hoped to show that with an appropriate list of properties only minimal parabolics in groups of Lie type and a few other configurations could arise; they obtained the full list and were rather dismayed by its length, which would make the applications they had in mind very difficult.)

Among other fun facts, I learned that the graph consisting of a triangle with a pendant vertex is called the paw in Yekaterinburg, but is the balalaika in Novosibirsk.

On the last day of the seven-day meeting, we had two talks on dual Seidel switching, by Vladislav Kabanov and Elena Konstantinova, who were using it and a more general operation to construct new Deza graphs and integral graphs.

After a problem session, the conference ended by a virtual tour of Yekaterinburg (or Sverdlovsk, as it was in Soviet times), covering the history, architecture and economics, and illustrated by photographs and historical documents; the tour guide was Vladislav’s daughter.

Life was made more difficult and stressful for me because I was doing something which would have been completely impossible in pre-COVID times: I spent some time moonlighting from the Urals conference to attend ALGOS (ALgebras, Graphs and Ordered Sets) in Nancy, France, a meeting to celebrate the 75th birthday of Maurice Pouzet, which I didn’t want to miss. Many friends from a different side of my mathematical interests were there; as well as Maurice himself, Stéphan Thomassé, Nicolas Thiéry, Robert Woodrow, Norbert Sauer, and many others.

The three hours’ time difference between Yekaterinburg and Nancy meant that there was not too much overlap between the two meetings, so although I missed most of the contributed talks in Nancy, I heard most of the plenaries.

Stéphan Thomassé talked about twin-width, a new graph parameter with very nice properties. Given a graph, you can identify vertices which are twins (same neighbours) or nearly twins; in the latter case, there are bad edges joined to only one of the two vertices; the twin-width is the maximum valency of the graph of bad edges. Bounded treewidth (for example) implies bounded twin-width, but not conversely; a grid has twin-width 4. Graphs with bounded twin-width form a small class (at most exponentially many of them), and, remarkably, it is conjectured that a converse also holds.

Jarik Nešetřil and Honza Hubička talked about EPPA and big Ramsey degrees respectively; I had heard these nice talks in Prague at the MCW, but it was very nice to hear them again.

Norbert Sauer talked about indivisibility properties for permutation groups of countable degree. I might say something about this later if I can get my head around it, but this may take some time. In particular, Norbert attributed a lemma and an example to me, in such a way that I was not entirely sure what it was that I was supposed to have proved! (My fault, not his – it was the end of a long day!)

Nicolas Thiéry gave a very nice talk on the profile of a countable relational structure (the function counting isomorphism types of n-element induced substructures), something to which Maurice Pouzet (and I) have given much attention, and on which there has recently been a lot of progress. (I discussed some of this progress here, but there has been more progress since.) In particular, structures whose growth is polynomially bounded are now understood, due to Justine Falque’s work, and for primitive permutation groups there is a gap from the all-1 sequence up to growth $2^n/p(n)$, where p is a polynomial, thanks to Pierre Simon and Sam Braunfeld.

Unfortunately the conference was running on BigBlueButton, some conference-enabling software which I had not encountered before but which is apparently popular in France. I am afraid that it was simply not up to the job. The second day of the conference saw some talks and sessions abandoned, because speakers could not connect; I could sometimes not see the slides at all, and the sound quality was terrible. I discovered that one is recommended to use Chrome rather than Firefox, and indeed it did work a little better for me, but not free of problems. On this showing I would not recommend this system to anyone.

In particular, a beautiful talk by Joris van der Hoeven was mostly lost for me. I couldn’t see the slides. Joris’ explanations were perfectly clear, even without the visuals, but sometimes I lost his voice as well. The talk was about the connections between different infinite systems: ordinals, Hardy fields, and surreal numbers. In better circumstances I would have really enjoyed the talk.

I hasten to add that the problems were completely outside the control of Miguel Couceiro, the organiser, and marred what would have been a beautiful meeting.



from Peter Cameron's Blog https://ift.tt/3hCsyw9
Peter Cameron

The U.S. Postal Service has been in the news lately.  Does it seem like your mail takes longer to reach where you have sent it now?  Do you wonder why there are delays and whether things will change soon?  Read about the issues and make your own conclusions.

In this activity students appreciate the incredible network of workers and machines that sort and deliver about 150 billion pieces of mail each year.  Students learn how it is all funded and try to decide how the Postal Service could earn more or spend less in their budget.  They finally get to think about whether the post office should be a government run service or a self-sustaining business.

The activity: USPostalService-needs-help.pdf

CCSS: 4.OA, 4.NBT, 4.MD, 5.OA, 5.NBT, 5.MD, 6.RP, 6.NS, 6.SP, 7.RP.A

For members we have an editable Word docx, an Excel sheet showing our data and graphs, and a solutions/teaching-expectations sheet.

USPostalService-needs-help.docx    USPostalServiceData.xlsx    USPostalService-needs-help-solution.pdf



from Yummy Math
Leslie

Consider a fair coin, tossed 100 times to create a sequence of $H$s and $T$s.

A participant is allowed to ask 1 yes or no question (e.g. was the first coin flip heads?), then plays a game where he tries to guess all 100 coins. The participant is awarded $\$1$ for every coin guessed correctly, and loses $\$1$ for each incorrect guess. Find and prove an optimal strategy for the player.

I have a hunch that the optimal strategy may be to ask "Were there more heads than tails?" and then, depending on the answer, proceed to guess either all $H$s or all $T$s. With this strategy, the player is guaranteed nonnegative earnings, and I believe the expected value is $$\sum_{i=0}^{50}{\binom{100}{i}\left(\frac{1}{2}\right)^{99}(100-2i)} \approx \$7.96$$

I've confirmed the expected value with a Monte-Carlo simulation in Python, but I'm having trouble proving that this is optimal.

My best attempt to translate this into more rigorous mathematics is to consider the yes/no question as a partition. Let $X$ be the set of $2^{100}$ possible sequences and $x$ be the sequence flipped. A yes/no question will always partition the set into two. Suppose that set $A$ is the set of all sequences in which the answer to our question is "yes"; then the expected value of our game would be $$E[G] = \frac{|A|}{2^{100}}E[G|x\in A]\space + \left(1-\frac{|A|}{2^{100}}\right)E[G|x \notin A],$$

where G is the expected value of the game, playing with some optimal strategy. I've also made the note that given any specific set $A$, $x \in A$ implies there is an optimal (but not necessarily unique) guess. For instance, if we know that there are more heads than tails, a sequence of 100 $H$s is an optimal guess.
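For reference, a Monte-Carlo check along the lines the asker mentions might look like this (a sketch; the trial count is arbitrary):

```python
import random

def estimate(trials=200_000, n=100):
    """Expected payoff of: ask 'more heads than tails?', then guess all H or all T."""
    total = 0
    for _ in range(trials):
        heads = sum(random.randint(0, 1) for _ in range(n))
        correct = heads if heads > n - heads else n - heads   # follow the answer
        total += correct - (n - correct)                      # +1 per hit, -1 per miss
    return total / trials

print(estimate())   # should land near the closed-form value of about 7.96
```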



from Hot Weekly Questions - Mathematics Stack Exchange
user815048

Inverse scattering, or "seeing with waves", is an active topic of research in modern applied mathematics. Examples of its practical applications include weather radar, X-ray crystallography and sonar.

In this video, Professor Fioralba Cakoni and I explain the basics of inverse scattering. We show how properties of islands, such as shape or size, are encoded into waves reflecting off their coasts.

We also discuss the ancient wave-based navigation art of Marshall Islanders.

Enjoy the video! [Inverse Scattering 101 (Feat. Fioralba Cakoni)](https://youtu.be/J_Lf5EDy_Wc)

submitted by /u/ssiltane

from math https://ift.tt/34KEnwG

I love Lie group actions! My contact with this topic occurred in the context of differentiable manifolds and Riemannian geometry courses, and while studying Klein geometries. My knowledge is still very limited, though.

I would like to specialize in a research area in which I could work a lot with Lie group actions from a geometric point of view. It would be very nice if I could find some suggestions/possibilities here. I'm not connected to a university now, and I couldn't find pithy information on the internet.

Thank you all, in advance.



from Hot Weekly Questions - Mathematics Stack Exchange
Alice

I know that the derivative of a differentiable function doesn't have to be continuous. How discontinuous can a derivative be?

Inspired by Limits and continuity of a derivative, I was thinking of defining the notion of pseudo-continuous: $f:(a,b) \to \mathbb R$ is pseudo-continuous at $x \in (a,b)$ if $$ f(x) = \lim_{y\to x} \frac1{y-x} \int_x^y f(t) \, dt .$$ And then I wanted to show that a function is the derivative of a differentiable function if and only if it is pseudo-continuous.

But then I realized that the derivative doesn't have to be Lebesgue integrable, for example $$ f(x) = \frac x{\log|x|} \sin\left(\frac1x\right) , \quad x \in (-\tfrac12,\tfrac12) ,$$ or $$ f(x) = x^2 \sin\left(\frac1{x^2}\right) ,$$ Does there exist a differentiable function $f:(0,1) \to \mathbb R$ such that its derivative restricted to any subinterval of $(0,1)$ fails to be in $L^1$?



from Hot Weekly Questions - Mathematics Stack Exchange
Stephen Montgomery-Smith

Hi guys, actuary here (I have a B.A. in mathematics from undergrad).

Wanted to ask about some graduate opportunities. Once I achieve my final credential (FSA), I want to explore some possible next steps. I know I can always learn on my own, but I would like to learn about some possible academic programs.

Do you know of the types of programs available? (besides just a PhD in math).

I can think of financial engineering, quantitative finance, statistics (data science), machine learning?

What else is there?

My current academic/work background is in statistics /financial math / quantitative finance.

Please do not crucify me, I am using google as well but I also wanted to talk to people about it.

Thank you in advance.

submitted by /u/Noah0

from math https://ift.tt/34DHzKy

Gnomon by Athanasius Kircher. Astrological symbols are arranged around an arch, with grid lines emerging from each. Some cherubs who look like they know what they're doing are in the middle with measuring sticks and a globe.

Hello, my name’s Christian Lawson-Perfect and my main mathematical interest is “everything”.

Before The Aperiodical existed as its own thing, the only outlet I had for my mathematical eclecticism was a series of posts on the Acme Science blog called Aperiodical Round-Up. Eventually I stopped writing them, as work and family took up more of my time. This post has been sitting in The Aperiodical’s drafts folder for six years. Time to finish it!

Let’s begin with the first 10,000 digits of π dialled on a rotary phone.

10,000 digits is more than I could ever remember. I’m glad I’ve got a reference to come back to if I ever need anything after the first six I’ve memorised. References are handy to have, and it’s hard to find an area of maths that doesn’t have at least one.

There’s an Encyclopedia of Finite Graphs. If you need slightly more but not a lot more whimsy, there’s the House of Graphs and its database of interesting graphs. If graphs aren’t your bag, there’s also a database of L-functions and modular forms.

Counterexamples are harder to categorise, but also worth collecting. Math Counterexamples features such handy objects as a module without a basis.

Giovanni Resta’s Numbers aplenty is an Aladdin’s cave of number facts. It’s like maximalist Number Gossip. There’s a convention to use wordplay when naming families of numbers to help you remember what they mean, like Achilles (powerful but not a perfect power) or emirp (prime forwards and backwards). You really see the limits of this approach when they’re gathered together on one page: any idea what an iban number is when it’s not an International Bank Account Number? Anyway, like Number Gossip, you can type in a number and receive a list of facts about it that are interesting to varying degrees. Weirdly, it thinks it’s notable that $3435 = 3 + 3^5 + 4^3+ 5^5$, but not that $3435 = 3^3 + 4^4 + 3^3 + 5^5$.

If you want an online database of mathematical facts that’s neither authoritative nor useful, but could be mistaken from a distance for an occult conspiracy libel, “196 and other Lychrel numbers” is the site for you. Or rather, it was – it seems to have disappeared since I noted it down a few years ago. Ask yourself this: who stands to gain from its disappearance?

Once you start looking, you see menacing mathematics everywhere: “Kepler, Dürer and the mystery of the forbidden tilings” is a subheading to set the heart racing. Implied conspiracy? No, Imperfect congruence: an essay in four parts and a few appendices by Kevin Jardine, about tilings and the fact that no edge-to-edge regular polygon tiling of the plane can include a pentagon.

Once you start looking, you find that mathematics really is everywhere. Some walls in Leiden have formulas on them; you can thank Jan van der Molen and Ivo van Vulpen for that. Other walls in Leiden have poems on them; it’s important to have a varied diet.

Which Springer Verlag graduate text in mathematics are you? I’m Saunders Mac Lane’s Categories for the Working Mathematician.

Antonio Marquez-Raygoza has made a lovely page about constructing the Sierpiński triangle, with interactive diagrams. If you’ve read and absorbed all of that – I don’t think you have, by the way – you can visit Tadao Ito’s Hyperbolic Non-Euclidean World and Figure-8 Knot. Pack a bag – I reckon I’d need to spend at least a week in Hyperbolic Non-Euclidean World to see all the sights.


For some mathematical enrichment closer to home, Yutaka Nishiyama has thought up 55 examples of mathematics in daily life.

Now, a sequence of links to pictures.

The Picture this maths blog, run by Rachael Boyd and Anna Seigal, features essays on an eclectic range of topics. The schtick is – maybe you can guess – there are lots of pictures.

DivulgaMAT has a collection of Instantáneas Matemáticas – snapshots of maths in old works of art, which I’ve always got time for.

Dataisnature collects examples of the “hand-drawn yet precise diagrams with sparse lines and the occasional dot” aesthetic. A post linking John Cage’s music and nomography has reignited my fascination with nomograms, leading me to Ron Doerfler’s book on The Lost Art of Nomography, and his fabulous 2010 graphical computing calendar. Excitingly, there’s a program called pyNomo which sufficiently powerful wizards can use to produce their own nomograms!

Robert Munafo has drawn a charmingly old-hat ASCII fractal drawing of the Feigenbaum point. However, the connoisseur knows that a true fractal must be printed out on a dot matrix or at least produced with a typewriter: anything else is just sparkling self-similarity.

That’s it for now!



from The Aperiodical https://ift.tt/2G7G8Kb
Christian Lawson-Perfect

I would like to determine the general term of the following sequence defined by an infinite integral: $$ I_n = \int_0^\infty \left| \frac{\sin t}{t} \right|^n \, \mathrm{d}t \, , $$ wherein $n =3, 5, 7, \dots$ is an odd integer.

It can be checked that the integral is convergent for all values of $n$ in the prescribed range. The case of even $n$ is solved in A sine integral $\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$. Also, $I_1 = \infty$. I have tried to use the method of repeated integration by parts, but in vain. I was wondering whether there exists a suitable approach to address this problem more effectively.
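Numerical values are at least easy to obtain while searching for a closed form (a sketch assuming SciPy; the integral is summed piecewise over $[k\pi,(k+1)\pi]$ and truncated, which is safe for $n\ge 3$ since the tail terms are $O(k^{-n})$):

```python
from math import pi, sin
from scipy.integrate import quad

def I(n, pieces=2000):
    """Approximate I_n = integral of |sin t / t|^n over [0, infinity)."""
    f = lambda t: 1.0 if t == 0 else abs(sin(t) / t) ** n
    return sum(quad(f, k * pi, (k + 1) * pi)[0] for k in range(pieces))

for n in (3, 5, 7):
    print(n, I(n))
```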



from Hot Weekly Questions - Mathematics Stack Exchange
Daddy

The bug is placed at point $(0,0)$. From $(x, y)$ the bug can move to $(x+1, y)$, $(x-1, y)$, $(x, y+1)$, and $(x, y-1)$. Some points are dangerous. To know which points are safe, we check if $$n(|x|)+n(|y|)\le23 \tag{1}$$ where $n(a)$ is the sum of digits of $a$. Question: How large is the area that the bug can access?

Well, if it was just to check the sum of the absolute values of coordinates, then $|x|+|y| \le 23$ would draw a square whose diagonal is $d=46$ and area $d^2/2$. However, with the rule $(1)$ I don't think one gets an ordinary geometrical object.

Any help is appreciated.


Picture by @Jyrki: Below is a picture of the set of points $(i,j)$, $0<i,j<700$, courtesy of Mathematica. The point $(i,j)$ is colored red if it meets inequality $(1)$, and blue if it does not. Observe that the axes are "red" up to $698$. JL

[image omitted]

[/Edit]


A Python program gives (for 14 instead of 23): [image omitted]
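A flood fill of this kind is short to write; here is one possible sketch (breadth-first search from the origin through safe points only; the limit-14 case runs quickly, while 23 is the same code but noticeably slower):

```python
from collections import deque

def digit_sum(a):
    return sum(int(d) for d in str(a))

def accessible_area(limit):
    """Number of points reachable from (0, 0) by 4-neighbour moves through safe points."""
    safe = lambda x, y: digit_sum(abs(x)) + digit_sum(abs(y)) <= limit
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in seen and safe(*nxt):
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

print(accessible_area(14))   # the picture above; replace 14 with 23 for the actual question
```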



from Hot Weekly Questions - Mathematics Stack Exchange
VIVID

I'm 17, at a pretty big (all-male) school in Australia (1300ish students from ages 12 to 18). I'm trying to start a club for practising competition mathematics, because it's something I wish I had done but it is not really part of the culture of many schools here in Australia.

Some students take part in the Australian Mathematics Competition (same format as the AMC) every year, but there is no real aspiration for success and it has certainly never crossed most of their minds to try practising.

I envision meeting with a group of kids aged 12 to 16 once a week for about 40 minutes and allowing them to attempt problems together (probably all working together on big whiteboards) that I have selected according to their ability with some guidance which I (and perhaps some other students who are quite gifted the same age as me) can provide.

I think it's a really good social ability and it would be nice if we could cultivate that kind of mathematics through problem-solving. Especially since the Australian curriculum tends not to extend the top 20% of students until grades 11 and 12 (17 and 18 year olds).

I'd love some advice on what format something like this should take, perhaps how it is done in your school or what kind of teaching is most valuable.

:)

submitted by /u/PolymorphismPrince

from math https://ift.tt/3gFp7U6

Find all pairs of integers $(x, y)$ such that $$x^3+y^3=(x+y)^2.$$

Since $x^3+y^3 = (x+y)(x^2-xy+y^2)$, we get (assuming $x+y \neq 0$, since the case $x+y=0$ gives solutions on its own) that $$x^2-xy+y^2=x+y$$

this can be expressed as $$x^2-(y-1)x+y^2-y=0.$$

Since we want integers, we should probably look at when the discriminant is nonnegative?

$$\Delta = (y-1)^2-4(y^2-y)=-3y^2+6y+1$$

so for $\Delta \geqslant 0$

$$-\frac{2\sqrt3}{3}+1 \leqslant y \leqslant \frac{2\sqrt3}{3}+1$$

so the only possible values are $y=0,1,2.$ However, I don't see how this is helpful at all here. What should I do?
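Not a method, but a small brute force is a useful check on whatever case analysis one ends up with (a sketch; the box $[-30,30]^2$ is arbitrary):

```python
solutions = [(x, y) for x in range(-30, 31) for y in range(-30, 31)
             if x ** 3 + y ** 3 == (x + y) ** 2]
print(solutions)
# the family (t, -t) shows up for every t, alongside a handful of small sporadic pairs
```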



from Hot Weekly Questions - Mathematics Stack Exchange
Nate

Every finite abelian group $A$ is isomorphic to its dual group $A^*:=\operatorname{Hom}(A,\mathbb{C}^\times)$. The isomorphism of $A$ with $A^*$ is non-canonical, and one way to make this precise is to say that the functor $A\mapsto A^*$ is contravariant, so this functor cannot be naturally isomorphic to the (covariant) identity functor. I wonder if there is an analogous construction that works for all finite groups. Specifically:

Does there exist a contravariant functor $F$ from the category of finite groups to itself, such that $F(G)$ is isomorphic to $G$ for every group $G$?

The imprecise question I have in mind is: for an arbitrary finite $G$, can one construct a group $G'$ that is non-canonically isomorphic to $G$? The existence of a contravariant functor $F$ as above would be one precise way to answer the imprecise question.



from Hot Weekly Questions - Mathematics Stack Exchange
Julian Rosen

While trying to learn undergraduate topology, I came across this lecture by Dr. Zimmerman, who claims "Topology is a generalization of real analysis, a lot of topology anyway." They are obviously related, and topology does seem more general, but this statement still surprised me.

Can all of real (and complex) analysis be recast in the framework of topology?

Edit: Could I say that real analysis is just studying the topology of $\mathbb{R}$?



from Hot Weekly Questions - Mathematics Stack Exchange
Mithrandir

Hello, I want to ask for some advice:

I'm not so good at math; I failed many classes because of it, but I still kept grinding and got out of high school with good grades. Now I want to do mathematics, but I don't know if it's worth doing something like that:

Can someone who isn't a genius at math still do a PhD and do mathematical research if they work hard enough? Or is it gatekept?

submitted by /u/Gabraemalek

from math https://ift.tt/32sMSd7

I’m doing something outside of school regarding manifolds, but since I’ve barely any understanding of partial derivatives and other things, I was wondering if anyone has a really basic explanation of it that I could use to better understand my readings. Thank you :))

submitted by /u/aaaaypple

from math https://ift.tt/3b11GU0

The National Association of Mathematicians (NAM) has six contributors on the MAA’s Math Values blog. They are Jacqueline Brannon-Giles, Jamylle Laurice Carter, Leona A. Harris, Haydee Lindo, Anisah Nu’man and Omayra Ortega. The NAM is “a non-profit professional organization in …


from Blog on math blogs https://ift.tt/3hLlOMJ
racheljcrowell

Here is the problem:

There are 1000 people in a hall. Initially one person has his hand painted. Every second everyone shakes hands with someone else (in the sense that every second 500 couples form and the two people in each couple shake hands with each other). In addition, no two people can ever shake hands more than once. Of course, whenever someone with a painted hand shakes the hand of someone whose hand is clean, the clean hand gets painted. How much time, at most, is needed to paint all the hands? Prove it.
Clarification: we are only considering games that run the full length, i.e. the game has to be able to get to the last round after which all possible handshakes have occurred, no dead ends allowed. So the question is posed within the framework of such games.

My considerations:

I've tried pretty hard to get the answer for a general n-people game, or even for this 1000-people game, but there really seems to be nothing helpful to prove it, or even guess it, or find it easily for large n, especially given that I have manually bashed out the first cases n = 2,4,6,8,10,12 (the answers being 1,2,3,5,6,8 rounds respectively), which seem to have no useful relationship whatsoever with each other or with n. I think the greedy algorithm is optimal, but I haven't even bothered proving that, since it doesn't really help to find the answer to the problem and prove it, so at times I've just tried to assume it, but even then it didn't quite get me anywhere. Also I don't think there is some beautifully simple symmetry argument to get an answer here, because that should hopefully be reflected in the cases for the first few n, but maybe I am missing it; I couldn't think of anything of that kind.

What I am thinking now is that the answer might be some really complicated non-closed-form/non-elementary function of n, or possibly some function of n that is not even expressible (this last in the sense that it's some function whose values for each given n are defined to be the ones given by such a game, or by some isomorphic problem; and there definitely are such functions out there, so this could be a possibility). But if any of these last options is correct, how could one possibly prove that?

Thank you very much for the help, I hope there is someone who can solve this.



from Hot Weekly Questions - Mathematics Stack Exchange
Gen

Frog game

The Frog game is the generalization of the Frog Jumping (see it on Numberphile) that can be played on any graph, but by convention, we restrict the game to Tree graphs (see wikipedia).

The game is simple to play, but it can be hard to determine if it is solvable in a given vertex.

In short, given a graph, the goal of the game is for all frogs to host a party on a single vertex of that graph. Initially, every vertex has one frog. All of the $f$ frogs on some vertex can "jump" to some other vertex if they both have at least one frog on them and if they are exactly $f$ edges apart from each other. If a "sequence of jumps" exists such that all frogs end up on a single vertex, then the party is successfully hosted on that vertex and we call that vertex a "lazy toad" because the frog that started there never jumped.

Formal rules

Let $v,w,u\in V$ be vertices of a graph $G=(V,E)$ and $n=|V|$ the number of vertices.

Let $f_m:V\to\mathbb N$ count the number of frogs on a given vertex where $m$ is the current declared move. A game takes $(n-1)$ moves to solve (if possible) and is played as follows:

  • We start the game on move zero $m=0$ where each vertex has one frog $f_0(v)=1,\forall v\in V$.

  • If it is move $m\lt n-1$ and the following conditions are met:

    1. There exist two vertices $v,w\in V$ that have frogs on them $f_m(v)\ge 1$, $f_m(w)\ge 1$.
    2. There exists a path from $v$ to $w$ containing exactly $f_m(v)$ unique edges.
  • Then, a legal move (or jump) can be made and is denoted as $(v\to w)$. If the move is made, then the following transitions occur:

    1. All frogs jump from $v$ to $w$, that is, $f_{m+1}(v)=0,f_{m+1}(w)=f_m(v)+f_m(w)$.

    2. The remaining frogs don't move, that is, $f_{m+1}(u)=f_{m}(u),\forall u\not\in\{v,w\}$.

    3. We declare the next move $m+1$. (The game ends if we can't move.)

  • We say that the game is solvable in some vertex $w$ if there exists a sequence of legal moves such that we end up moving all frogs to the vertex $w$, which means $f_{n-1}(w)=n$. Consequently, if the game is solvable in $w$ then $f_{n-1}(v)=0,\forall v\ne w$. The solvable vertex $w$ is called a "lazy toad" because the frog on the vertex $w$ never jumped during the game.

For some examples, you can visit a post about this game on mathpickle. There is also an included link to a google drive where all trees with less than $15$ vertices have been computationally solved and categorized by the number of vertices then by the number of non-solvable vertices.

Remark. I've asked a general question about this game before Frogs jumping on trees, which asks if we can characterize solvable and non-solvable vertices of certain sets of graphs, but that appears to be a hard problem. Here, I've decided to restrict the problem to binary trees and only observe the root vertex.
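For experimentation, the rules above are easy to turn into an exhaustive check: on a tree, the "path with exactly $f_m(v)$ unique edges" in condition 2 just means the tree distance from $v$ to $w$ equals $f_m(v)$, so a plain memoised search can test solvability of the root. This is only a sketch and, being exhaustive, is really only practical for very small trees:

```python
from collections import deque
from functools import lru_cache
from itertools import product

def full_binary_tree(h):
    """Adjacency lists of the full binary tree T_h, root = vertex 0."""
    n = 2 ** (h + 1) - 1
    adj = [[] for _ in range(n)]
    for v in range(1, n):
        p = (v - 1) // 2
        adj[p].append(v)
        adj[v].append(p)
    return adj

def all_distances(adj):
    """All-pairs distances via BFS from every vertex."""
    n = len(adj)
    dist = [[0] * n for _ in range(n)]
    for s in range(n):
        seen, q = {s}, deque([(s, 0)])
        while q:
            v, d = q.popleft()
            dist[s][v] = d
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    q.append((w, d + 1))
    return dist

def solvable_in_root(h):
    """Can all frogs on T_h be gathered on the root?"""
    adj = full_binary_tree(h)
    dist = all_distances(adj)
    n = len(adj)

    @lru_cache(maxsize=None)
    def reachable(state):
        if state[0] == n:                              # all n frogs sit on the root
            return True
        occupied = [v for v in range(n) if state[v] > 0]
        for v, w in product(occupied, occupied):
            if v != w and dist[v][w] == state[v]:      # legal jump v -> w
                nxt = list(state)
                nxt[w] += nxt[v]
                nxt[v] = 0
                if reachable(tuple(nxt)):
                    return True
        return False

    return reachable(tuple([1] * n))

for h in (1, 2):
    print(h, solvable_in_root(h))   # True, False
# solvable_in_root(3) returns True as well, but this naive search can already be slow there
```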



Question

Let $T_h=(V,E)$ be a full binary tree of height $h$. This means that it has layers $0,1,2,\dots,h$ where the root $r\in V$ is on layer $0$. That is, we have $|V|=2^{h+1}-1$. Let $h\in \mathbb N$ because $h=0$ is a single vertex which is trivial.

I would conjecture that for all $h\ge 4$, every vertex of $T_h$ is a "lazy toad" (is solvable).

However, for simplicity, my question is just about the root vertex:

Let $h\in\mathbb N, h\ne 2$. Can we prove that the root $r$ of every such $T_h$ is a "lazy toad" (is solvable) ?

Notation

Let $v(i,j)\in V$ be the $j$th non-root vertex on the layer $i=1,2,\dots,h$ where $j=1,2,\dots,2^i$.

Lets declare a shorthand notation for moves:

  • "$(v\to w)$ then $(w\to u)$" to be equivalent to "$((v\to w)\to u)$".

  • "$(v\to w)$ and $(u\to w)$" to be equivalent to "$(v\to w \leftarrow u)$".

For clarity, let $f=f_m(v)$ in "$\left(v\xrightarrow{f} w\right)$" stand for the number of frogs being moved.

Alternative suggestions for notation are welcome.



Examples (Reducing $T_3,T_4,T_5$ to $T_1$)

The fact that we can reduce examples to smaller examples makes me think that maybe induction is possible. Notice that $T_1$ was solved in the original version of the game where all vertices are on a line and that $T_2$ is not solvable, so $T_3$ is the first "real" case.


$$(h=1)$$ The trivial case $T_1$ is trivially solvable in all vertices:

  • [1.1] In $r$ using legal moves $v(1,1)\xrightarrow{1} r$ and $v(1,2)\xrightarrow{1} r$ in any order.
  • [1.2] In $v(1,1)$ by using moves $\left(r\xrightarrow{1} v(1,2)\right)\xrightarrow{2} v(1,1)$.
  • [1.3] In $v(1,2)$ by using moves $\left(r\xrightarrow{1} v(1,1)\right)\xrightarrow{2} v(1,2)$.


$$(h=2)$$ The second case $T_2$ is not solvable in the root. This can be checked by listing all move sequences.


$$(h=3)$$ Let's break down the $T_3$ case. This case can be viewed as a "base" trivial $T_1$ case that has two trivial $T_1$ cases "connected" to each of its leaf vertices. Let the vertices of those four "connected" trivial cases $T^{(1)}_1,T^{(2)}_1,T^{(3)}_1,T^{(4)}_1$ be indexed as: $v_1,v_2,v_3,v_4$ and let the vertices of the "base" trivial $T^{(0)}_1$ case be indexed as $v_0$.

That is, we can observe the $T_3$ as "composition" of $T_1$'s:

[diagram omitted]

Now we can see that the solution in the root of $T_3$ follows from the solution of $T_1$. That is:

  1. solve the "connected" $(T^{(i)}_1,i\in\{1,2,3,4\})$'s in a leaf vertex using [1.3]: $$\left(r_i\xrightarrow{1} v_i(1,1)\right)\xrightarrow{2} v_i(1,2),$$

  2. then do the moves $v_i(1,2)\xrightarrow{3} r_0$ for $i\in\{1,2,3,4\}$ to transfer those frogs to root $r_0$,

  3. then finally just solve the base $T^{(0)}_1$ with [1.1]: $v_0(1,1)\xrightarrow{1} r_0$ and $v_0(1,2)\xrightarrow{1} r_0$.

That is, the $T_3$ case is solved using the solutions of the trivial $T_1$ case.

In other words, if we know the solutions of $T_1$, we can reduce the $T_3$ to $T_1$ again.


$$(h=4)$$ Similarly to the previous case, we can observe $T_4$ as a "composition" of two $T_2$'s connected to each of the two leaves of one "base" $T_1$. That is, we "reduce" $T_4$ to $T_1$ by "solving" the four $T_2$'s, moving all frogs from their vertices to the root of the "base".

We can "reduce" ("solve") $(T^{(i)}_2,i\in\{1,2,3,4\})$'s using following two move sequences:

  1. $v_i(2,1)\xrightarrow{1} v_i(1,1)$ and $v_i(2,2)\xrightarrow{1} v_i(1,1)$, then $\left(v_i(1,1)\xrightarrow{3} v_i(2,4)\right)\xrightarrow{4} r_0$.

  2. $r_i\xrightarrow{1} v_i(1,2)$ and $v_i(2,3)\xrightarrow{1} v_i(1,2)$, then $v_i(1,2)\xrightarrow{3} r_0$.

The vertices from the first (second) move sequence are colored blue (green) here in the $T_2^{(i)}$:

[diagram omitted]

After the reduction, we are left with the "base" $T_1$ which is solved with [1.1] from the trivial case. In other words, we essentially moved all frogs from layers $2,3,4$ of $T_4$ to the root, leaving us only with layers $0,1$ which is equivalent to $T_1$.


$$(h=5)$$ Similarly to previous two cases, this $T_5$ case can be reduced to $T_1$ by solving four $T_3$'s. That is, we can reduce the layers $2,3,4,5$ by moving all their frogs to the base root. To do this, there are three separate move sequences highlighted in the following image (each puts $5$ frogs in a leaf vertex before moving the frogs to the base root):

  1. $\left(\left(v_i(2,1)\xrightarrow{1} v(3,1)\right)\xrightarrow{2} v_i(1,1)\xleftarrow{1} r_i\right)$, then $\left(v_i(1,1)\xrightarrow{4}v_i(3,8)\right)\xrightarrow{5}r_0.$

  2. $\left(v_i(3,3)\xrightarrow{1} v_i(2,2)\xleftarrow{1} v_i(1,2)\right)$, then $\left(\left(v_i(2,2)\xrightarrow{3} v_i(1,2)\right)\xrightarrow{4}v_i(3,2)\right)\xrightarrow{5}r_0.$

  3. $\left(v_i(3,5)\xrightarrow{1} v_i(2,3)\xleftarrow{1} v_i(3,6)\right)\xrightarrow{3} v_i(3,7)$, then $\left(v_i(2,4)\xrightarrow{1}v_i(3,7)\right)\xrightarrow{5}r_0.$

[diagram omitted]


$$(h\ge 6)$$ These examples so far motivate the following question:

Is the reduction from $T_h$ to $T_1$ by solving four $T_{h-2}$'s always possible?

This could be one way to solve the problem. On the other hand, are there any other reductions?



Reduction pattern based on $T_6,T_7$

Since reductions from $T_h$ to $T_1$ over $T_{h-2}$'s seem to get more complex as $h$ grows, this raises the question of whether there are any simpler reductions.

It is sometimes possible to reduce $T_h$ to $T_{h-2}$ over $T_1$'s. Take a look at the following examples.

Let $k=0,1,2,\dots,2^{h-2}-1$. If we look at $T_6$, we can reduce its top two layers (reducing it to $T_{4}$) using the following:

  1. $v(6,4k+1)\xrightarrow{1} v(5,2k+1)$ and $v(6,4k+2)\xrightarrow{1} v(5,2k+1)$,

  2. $\left(v(5,2k+2)\xrightarrow{1} v(6,4k+3)\right)\xrightarrow{2} v(6,4k+4)$,

  3. $\left(v(5,2k+1)\xrightarrow{3} v(6,4k+4)\right)\xrightarrow{6} r$.

Similarly, we can reduce $T_7$ to $T_5$ by using steps 1. and 2. from above but modifying step 3. as follows (note that you also shift the layers from $5$ to $6$ and from $6$ to $7$ in all steps, of course):

  3. $\left(v(7,4k+4)\xrightarrow{3} v(6,2k+1)\right)\xrightarrow{6} r$.

We can ask if there are any other trees $T_h$ whose top two layers can be reduced.

Notice that we are collecting $T_1$'s in this reduction, that they contain $3$ vertices (frogs) each and that the total number of used $T_1$'s in our move sequence(s) must be a power of $2$ to completely reduce the top two layers, where we have $2$ possible layers $\{h,h-1\}$ to put the frogs on before jumping to root. Because of this, it is meaningful to try to extend this pattern to the full binary trees of the following kind:

If $h\in\{3\cdot2^t,3\cdot2^t+1\},t\in\mathbb N$, for which $t$'s is such $T_h$ reducible to $T_{h-2}$?

This now does smell like something that would be used in an inductive proof. But, even if this pattern holds for all $t$, it alone is not enough. The question is, can we find a sufficient set of reduction patterns?

Let $s\lt h$ and $h\gt 2$. Is it true that every $T_h$ can be reduced to some $T_s$ ?



Trivial reduction pattern

If $h=2^t+t-2,t\ge 2$ and $T_{t-1}$ is solvable in root, then $T_h$ can be reduced to $T_{h-t}$.

This is trivially true because the distance from the root of the "$T_{t-1}$ that spans the top $t$ layers" to the "(base) root $r=r_0$" is precisely equal to $2^t-1$, which is the total number of frogs in $T_{t-1}$.

For example, if $t=4$ we have $h=18$. Solving the top $t=4$ layers as $(T_{t-1}=T_{3})$'s in their roots puts $15$ frogs on vertices that are on layer $15$. These frogs can jump to the root of $T_h$, which leaves us with $T_{h-t}=T_{14}$.

Both this and the previous reduction pattern are exponential and are not enough to cover all values of $h$ in the inductive argument. That is, we need more statements like this to cover all values.



Other examples

I've managed to find reductions for $T_h$ for $h$ up to $20$ by hand to show they are all solvable in the root, but I still don't see how I could complete the inductive argument or solve this problem in some other way for all $h\ne 2$.

For example, I can reduce $T_8,T_9,T_{10},T_{11}$ to $T_5,T_5,T_6,T_7$ respectively.

Then, $T_{12},T_{13}$ belong to $t=2$ from the above reduction pattern for $h\in\{3\cdot2^t,3\cdot2^t+1\}$ and can be reduced to $T_{10},T_{11}$. It is not as easy as $t=1$, but the move sequences do exist.

After those, I can reduce $T_{14},T_{15},T_{16},T_{17}$ to $T_{11},T_{11},T_{12},T_{13}$.

Then, $T_{18}$ belongs to $t=4$ of the trivial reduction pattern which means it can be reduced to $T_{14}$.

For $T_{19}$ I found a reduction to $T_{14}$,... and so on.

Note that these reductions are not the only ways to solve a tree in the root.



Necessary reduction condition

Notice that to reduce $T_h$ to $T_{h-a}$ for some $a\ge 2$, we need to move all frogs from top $a$ layers to the root. The number of frogs we are moving to the root in such reduction is $(2^a-1)2^{h-a+1}$. That is, we are shaving off the $T_{a-1}$'s at the top layers. For such a reduction to be possible, there must exist a set of moves that transfers the frogs to the root $r$

$$M_r=\{v(i,j)\xrightarrow{f_{i,j}}r\},\text{ where } i\in\{h,h-1,\dots,h-a+1\}.$$

Assuming such moves exist at some point in the game, they are legal only if $f_{i,j}=i$, where $i$ is the layer on which the vertex $v(i,j)$ lies. On the other hand, the sum of the $f_{i,j}$'s must be equal to the total number of frogs that we are reducing (moving to the root). That is, for $M_r$ to be able to exist, we necessarily need to be able to partition the number of frogs we are moving onto the corresponding layers, and this is essentially what we state in the necessary condition.

We also care if our move sequences repeat or have patterns (for example, see the use of $k$ in the $T_6,T_7$ reduction pattern), because then instead of considering all frogs at once, we can consider only $(2^a-1)2^{b},b\in[0,h-a+1]$ frogs for some $b$, and repeat the same move pattern $(2^{h-a+1}/2^b)$ times.

Based on all of this, we can state the following necessary condition:

(Necessary condition for reduction). Let $h\gt2,a\ge2$. If reduction from $T_h$ to $T_{h-a}$ is possible, then there exists $b\in[0,h-a+1]$ and a partition of the number $(2^a-1)2^b$ into the layers $H=\{h,h-1,h-2,\dots,h-a+1\}$ where $(h-l)\in H$ is used at most $(2^{a-l-1})2^b$ times.

Of course, the converse does not hold because this is only a necessary (not a sufficient) condition.
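The counting condition itself is straightforward to enumerate mechanically, which can help when hunting for reduction patterns by hand. A possible sketch (it checks only the partition condition, not the existence of actual move sequences; it assumes $a < h$ so that all layers involved are positive):

```python
def candidate_partitions(h, a, b):
    """Multisets of layers from {h-a+1, ..., h} summing to (2**a - 1) * 2**b,
    with layer h-l used at most 2**(a-l-1) * 2**b times (the necessary condition)."""
    total = (2 ** a - 1) * 2 ** b
    layers = [(h - l, 2 ** (a - l - 1) * 2 ** b) for l in range(a)]

    def extend(idx, remaining, chosen):
        if remaining == 0:
            yield {layer: c for layer, c in chosen if c}
            return
        if idx == len(layers):
            return
        layer, cap = layers[idx]
        for count in range(min(cap, remaining // layer) + 1):
            yield from extend(idx + 1, remaining - count * layer, chosen + [(layer, count)])

    return list(extend(0, total, []))

# e.g. the a=5, b=0 partition 31 = 15 + 16 used for T_19 shows up here:
for p in candidate_partitions(19, 5, 0):
    print(p)
```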

For example, if we want a $T_h$ to $T_{h-2}$ reduction, then we need to partition $3\cdot 2^b$ into $\{h,h-1\}$. If we assume that the solution is "simple", i.e. that all frogs are either on layer $h$ or on layer $h-1$ (but not mixed between both layers), we get the following pattern:

  • $h$ must either be of the form $3\cdot 2^t$ or $3\cdot 2^t+1$. This was essentially our assumption on $h$ when we talked about extending the $T_6,T_7$ pattern. But now we also need a sufficient condition. That is, if we want to prove this is possible for every $t$, then we need to find, for every $t$, a sequence of corresponding legal moves, or prove that one exists.

Solvable partitions?

Surprisingly, the above condition is already very useful (efficient). For all $h\in(2,20)$ I've solved so far, I needed to consider only $31$ or fewer vertices ($a\le 5,b\le 3$) to show that all vertices can be reduced to the root (that we can send all frogs to the root).

For example, the $T_{19}$ has over a million vertices! But, to solve it, one needs to consider just one $T_4$ (which has only $31$ vertices) at the top layers to find a solution! (Consider $a=5,b=0$ and the corresponding partition $31=15+16$ is solvable.) That is, the reduction pattern of that $T_4$ can simply be repeated on all adjacent $T_4$'s in the top five layers, which gives us a reduction to $T_{14}$ which we have already solved by reduction to $T_{11}$ then to $T_5$ then to $T_1$.

Essentially, we need to prove that at least one of the partitions given by the "(Necessary condition for reduction)" always allows the reduction to a smaller binary tree (that the converse holds for at least one partition). Then by induction, we have that they are all reducible to $T_1$.

I don't yet see how to construct move sequences for all cases of $h$. Alternatively, I do not know whether we can perhaps prove it without explicitly constructing solutions.



from Hot Weekly Questions - Mathematics Stack Exchange
Vepir
