Frequently Asked Questions
Here are some common questions and answers related to my work, the Seymour Second Neighborhood Conjecture, and the Graph Level Order data structure. If you have any additional questions, feel free to reach out.
About the Problem
The problem is the Seymour Second Neighborhood Conjecture. First posed in 1990, it has stood as a major open problem in mathematics and computer science. It concerns oriented graphs and asks whether every oriented graph contains a node with at least as many second out-neighbors as first out-neighbors. Solving it has implications for graph theory, algorithm design, and many real-world applications like network analysis.
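As a concrete illustration (my own sketch, not code from the paper; the function names are hypothetical), the conjecture's condition can be checked directly on a small oriented graph:

```python
# Sketch: checking Seymour's condition on a small oriented graph.
# An oriented graph is a digraph with no 2-cycles; adjacency maps each
# node to the list of its out-neighbors.

def first_out_neighbors(adj, v):
    """N+(v): the direct out-neighbors of v."""
    return set(adj[v])

def second_out_neighbors(adj, v):
    """N++(v): nodes reachable in exactly two steps, excluding v and N+(v)."""
    first = first_out_neighbors(adj, v)
    second = set()
    for u in first:
        second |= set(adj[u])
    return second - first - {v}

def seymour_vertices(adj):
    """Nodes whose second out-neighborhood is at least as large as the first."""
    return [v for v in adj
            if len(second_out_neighbors(adj, v)) >= len(first_out_neighbors(adj, v))]

# A directed 3-cycle: every node has |N+| = |N++| = 1, so all three qualify.
cycle = {0: [1], 1: [2], 2: [0]}
print(seymour_vertices(cycle))  # [0, 1, 2]
```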
To my understanding, the problem was first posed in 1990, which means it has been studied for decades. Across that time it has attracted mathematicians and computer scientists from many disciplines. Its importance is not just theoretical; it also has practical applications.
There is an extensive list of researchers who have contributed to the body of literature on the subject. This includes respected names such as Fisher, Havet and Thomassé, Kaneko and Locke. There have been significant partial results published and alternative conjectures proposed. However, the lack of a comprehensive approach incorporating both global and local perspectives likely limited progress.
About the Proof
There are a few elements that differentiate my approach. First, I began with a proof-by-contradiction assumption: I assumed that no node satisfied the conjecture. From there, I approached the problem like a house of cards, using BFS-layers to add constraints to each node of the graph. Within these BFS-layers, I add an element of feature engineering. I define new terms, interior neighbor and exterior neighbor, to better account for first and second neighbors. Then I show that these terms (along with back arcs) form a partition of a node's degree, unlike first out-neighbors and second out-neighbors. These stronger definitions are what is needed to tackle this slippery conjecture.
The well-ordering property ensures that any subset of elements has a least element. For this problem, we had to first establish partitions; these are precisely the BFS-layers. Rooted at the minimum-degree node, they form a well-ordering, and moreover a partial ordering. This gives two things. First, a stronger use case for mathematical induction. Second, the partial order opens the possibility of Hasse diagrams, which help to visualize the graphs.
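The BFS-layer partition mentioned above can be sketched as follows. This is the standard BFS layering by distance from a root, not the paper's actual code; the names are my own:

```python
from collections import deque

def bfs_layers(adj, root):
    """Partition the nodes reachable from root into BFS layers by distance.
    Layer d holds exactly the nodes at distance d from the root, so the
    layers are disjoint and cover every reachable node."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    layers = {}
    for v, d in dist.items():
        layers.setdefault(d, set()).add(v)
    return layers

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_layers(adj, 0))  # {0: {0}, 1: {1, 2}, 2: {3}}
```

Because each reachable node lands in exactly one layer, the layers are a genuine partition, which is what makes induction over them possible.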
Seymour originally asked whether a node's degree would double in an oriented graph. The Decreasing Neighborhood Sequence Property (DNSP) looks at the converse of this. It calls nodes non-Seymour and asks what happens if a node's second out-neighborhood is strictly smaller than its first out-neighborhood. This is the direct name given to the proof-by-contradiction assumption. Remember that this assumption is made for every node in the graph. Likewise, the DNSP assumption must also be made for every node in the graph.
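Under the DNSP assumption as described, every node would be non-Seymour. A sketch of that check (hypothetical names, not the paper's code):

```python
def dnsp_holds(adj):
    """Check the contradiction assumption as described: every node is
    non-Seymour, i.e. |N++(v)| < |N+(v)| for all v."""
    for v in adj:
        first = set(adj[v])
        second = set()
        for u in first:
            second |= set(adj[u])
        second -= first | {v}
        if len(second) >= len(first):
            return False  # v is a Seymour node, so DNSP fails
    return True

# In a directed 3-cycle each node has |N+| = |N++| = 1, so DNSP fails.
print(dnsp_holds({0: [1], 1: [2], 2: [0]}))  # False
```

The conjecture is exactly the claim that this check returns `False` on every oriented graph.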
This proof is intricate and closely tailored to the Seymour Second Neighborhood Conjecture. However, several elements may be liftable to other problems, and that is currently an active area of research. One such question comes from noticing how BFS trees were lifted from a simple algorithmic result into a data structure capable of helping to solve this problem. Is this unique to the BFS tree, or are there other algorithmic results that organize data being overlooked that could also be used as data structures?
My proof rigorously shows that no counterexamples can exist under the given conditions. On this website, I also include a section discussing possible refutations and why they do not hold. I understand that the mathematical community may wish to challenge some of my assumptions, definitions, or even my claims, but I stand behind the mathematical proofs and my results.
The most challenging part was coming up with my own definitions. Graph theory is a field with over 200 years of history and a wealth of established literature. Writing formally while staying consistent with existing terminology was crucial, but introducing new ideas meant I often couldn't rely solely on references to explain what I was doing.
I had to define those terms myself, and that's a tricky process. On one hand, the definitions needed to be rigorous and clear enough to be understood by others. On the other hand, they had to align with the context of my research without causing confusion or conflicting with existing concepts. It was a balancing act between innovation and clarity, and sometimes I had to refine my definitions multiple times before they resonated with readers. Honestly, it's a walk I'm still trying to walk.
To add a third layer to these definitions, I also had to justify their existence. If a definition is truly worthwhile, I should be able to justify it, so when I was able to show that my definitions could form partitions while standard graph-theoretic ones could not, it felt good. A second and third application used my definitions to show that transitive triangles and Seymour diamonds were no longer a problem for the conjecture. Brantner et al. had shown these to be stumbling blocks, so much so that in their paper they showed that any oriented graph without transitive triangles trivially produces a degree-doubling node.
Feedback is crucial. I knew what I wanted the definitions to say. I was seeing one thing and saying "this is an interior neighbor" and "this is an exterior neighbor." However, that early feedback let me know that my initial definitions were not precise enough and were confusing readers. I had to base the definitions on two terms instead of just one. By doing this, suddenly things started to make sense. The problem was not solved just yet, but I was able to break a lot more ground.
Absolutely. It was a game changer, especially for concepts like rooted neighborhood. This was ultimately the largest definition in the entire paper. Looking back and seeing that these rooted neighborhoods correlate with BFS-layers is so simple now.
The problem in 2020 was back arcs, and the transitive triangles they presented. Trying to explain things with only terms like dist(u, v) was not enough. Then I realized that every node had a special distance from the root node, and I defined the rooted neighborhoods.
Now back arcs travelled from a later rooted neighborhood to an earlier one. This new definition allowed me to investigate the possibilities of such an arc. The hypothesized situation was that the (then) most important lemma (Decreasing Exteriors) would fail with back arcs. What happens, though, is that oriented graphs without back arcs have a structure where this lemma holds. Once the back arcs are added back, the lemma still holds.
There is a simpler way to see the concept of decreasing exteriors: through the interior neighbors. The interior neighbors are actually forward-facing transitive triangles. These transitive triangles form a set covering of their parent's first neighborhood. That set covering puts every child into a cycle, which makes its out-degree at least 1 within that first neighborhood. That means we can decrease the size of the second neighborhood.
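Since interior neighbors are described here in terms of forward-facing transitive triangles, it may help to see a transitive triangle concretely: arcs u→v, v→w, and u→w. A sketch (mine, not from the paper) that lists them in a small oriented graph:

```python
def transitive_triangles(adj):
    """List all (u, v, w) with arcs u->v, v->w, and u->w, i.e. the
    transitive triangles of the digraph."""
    out = {u: set(adj[u]) for u in adj}  # fast membership tests
    return [(u, v, w)
            for u in adj
            for v in adj[u]
            for w in adj[v]
            if w in out[u]]

# Arcs 0->1, 0->2, 1->2 form one transitive triangle.
print(transitive_triangles({0: [1, 2], 1: [2], 2: []}))  # [(0, 1, 2)]
```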
I think of it like building a house of cards, or like testing software. We want to start small and simply test our understanding. This BFS approach is new and different; I had no idea how it would work or what results I would get. Starting with large graphs may have given me something where I could not visually see the difference between success and failure.
Some people want to reach the top and create something grand. I try to teach my children to practice patience. They need to understand that when they're building a house of cards, each individual card is important. The foundation must be placed carefully. By starting with these small graphs, much as in Kaneko and Locke's paper, I was able to search for patterns and simply ask whether they would hold up in mid-sized and larger graphs. This allowed me to build a solid foundation before scaling up.
This is an important consideration in complexity analysis because not everyone has a supercomputer or cloud storage. This problem is different from a Ramsey-theory-type problem, which only considers what happens in large graphs. I think it was very important to start small here.
It may be tempting to dive into larger graphs right away, but you have to question how you are building those larger graphs. What is the structure being used? Are there any set-theoretic or graph-theoretic principles linking them all together, or is it all random? There is nothing wrong with random data, as it can be very useful. If we can find a stronger link, though, we should try.
For Mathematicians
I am preparing the paper for submission to a mathematical journal. Details will be provided once it is accepted.
While the proof itself is rigorous and adheres to strict mathematical standards, the process that led to it was not purely mathematical. The key insight came from a computational perspective. A data structure and an algorithm revealed the underlying structure of the problem. This algorithm provided the intuition to establish the core mathematical properties: the Decreasing Neighborhood Sequence Property (DNSP) and the Graph Level Order data structure.
Thus, the solution bridges mathematics and computer science. The algorithm highlights the computational nature of the problem, while the proof validates its correctness. I would refrain from labeling it "purely" anything.
The proof builds on foundational concepts in combinatorics. By utilizing BFS, Hall's Marriage Theorem, Hopcroft-Karp, posets, Hasse diagrams, and algebraic group theory to prove some of the major results, this conjecture should now sit comfortably among the existing literature. Further, the proof uses partitions, data structures, divide-and-conquer concepts, mathematical induction, and proof by contradiction to resolve the conjecture.
For Computer Scientists
The problem has implications for understanding algorithms, data structures, and computational complexity, particularly in the context of graph-based problems. In a large sense it shows that there are problems that we as computer scientists can be solving in the mathematical world. We can be creating new data structures and algorithms to tackle them.
While the proof is primarily theoretical, its insights into oriented graphs have practical reach. First, the algorithm can be applied to various domains including social media, epidemiology, and security. Second, the Graph Level Order data structure can influence an even broader class of areas like network optimization, algorithm design, and complexity theory.
Yes! The website includes interactive visualizations and explanations designed to make the proof more accessible to those without a deep mathematical background.
While solving the SSNC, I noticed that most research papers addressing the conjecture were published in combinatorics or general mathematics sections. There was very little representation in computer science-focused venues. This surprised me because the conjecture sits at the heart of both disciplines. It is inherently mathematical but also deeply algorithmic, involving graph structures that computer scientists regularly work with.
Looking forward, I think part of the issue lies in how problems are categorized and communicated. Problems like the SSNC are often framed purely as mathematical challenges, which can limit interdisciplinary collaboration. Yet, these problems offer opportunities for unifying the approaches of math and computer science, leading to deeper insights and practical algorithms. My hope is that my work not only solves the SSNC but also highlights the importance of interdisciplinary thinking for tackling similar open problems in the future.
I've always believed that mathematics, when possible, should be taught with pictures. In this proof, I introduced my own notation and way of working with graphs in the paper. While it doesn’t change the proofs themselves, I wanted to do everything I could to make sure these definitions and concepts were as clear as possible. Visualization became a critical tool for achieving that clarity.
To start at the beginning: we start at the minimum-degree node. I open my papers with some initial lemmas; those were some of my first walks through the world of oriented graphs.
It wasn't long after that that I came up with the definition of the Decreasing Neighborhood Sequence Property. And that's when things started to get interesting. I started drawing graphs and seeing patterns, but I needed to test more cases. What about back arcs? Could I predict this?
I found that simply writing a computer program was easier than doing each example by hand. These first examples were drawn on the JavaScript canvas. The programs also included an information table to tell me about the nodes and their first and second degrees.
Doing those programs helped my thinking evolve. Once I saw certain patterns, I knew what I was looking for, and looking to prove. The proofs still weren't easy, but I wasn't lost in the wilderness.
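The information table described above tracked each node's first and second out-degrees. The originals were JavaScript canvas programs; here is a Python sketch of the same bookkeeping (the names are my own, not from those programs):

```python
def degree_table(adj):
    """Per-node table of (node, first out-degree, second out-degree),
    like the information table described above."""
    rows = []
    for v in sorted(adj):
        first = set(adj[v])
        second = set()
        for u in first:
            second |= set(adj[u])
        second -= first | {v}
        rows.append((v, len(first), len(second)))
    return rows

for v, d1, d2 in degree_table({0: [1, 2], 1: [3], 2: [3], 3: [0]}):
    print(f"node {v}: first-degree {d1}, second-degree {d2}")
```

Scanning such a table is a quick way to spot which nodes, if any, have a second degree at least as large as the first.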
Potential Critiques
This work represents a significant personal achievement for me, and I want to ensure it’s not just shared but truly understood. A traditional paper has limitations - it’s static, bound by strict formatting rules, and offers no room for clarification or elaboration once published. What if a reviewer misinterprets an arrow in a definition, or a key visualization isn’t clear enough? I don’t want to risk my work being dismissed or misunderstood due to something I could have addressed with better tools.
Not only that, but generating images in LaTeX and TikZ is different from D3.js and canvas. This website gives me another possibility: even if the reviewers never visit, I can utilize it and take their words seriously without having to be an expert at those image tools.
A website allows me to take every step necessary to ensure my work is received as I intend. I can use animations, interactive diagrams, and explanatory notes to make every concept crystal clear. Visitors can interact with the ideas, explore visualizations at their own pace, and even ask questions if something isn’t clear. This isn’t just about presenting my work; it’s about starting a dialogue. I want to anticipate misunderstandings and prevent them before they happen.
By creating a space where my ideas can be explored dynamically and revisited as they evolve, I’m giving myself the best chance of ensuring this achievement is recognized and appreciated.
I welcome critiques as part of the scientific process. My website includes a section addressing common objections and potential counterexamples, explaining why they do not hold under the framework of my proof.
Further, in the peer review process, both the editor and reviewers are likely to give me insightful feedback. That will happen away from the website, via the journal template, but it will still be reviews and critiques of my approach.
This is why I enjoy mathematics. It is not about trusting the person, or where they are from, or any of that other stuff. I released a paper to the public. In that paper, I have written up over 20 years of time spent on this problem. Most of that time was spent working on websites; only since 2019 have I been actively writing papers. But the results are all there: the assumptions, the proofs, the literature review, everything.
Why should people trust my proof? Because I detailed the proof step by step. There is nothing "left to the reader." Even for steps that may be unclear, I am happy to clarify. Where possible, I tried to provide illustrative examples.
So, trust the mathematics.
About My Journey
The journey started in 2003 when I was first looking to go to graduate school. I found a web site called "Open Problems for Undergraduates." As a subsection, it had graph theory. I thought I had solved the problem in a few hours and sent the solution to Dr. Nate Dean, who immediately told me I had it wrong.
While I was in graduate school, taking qualifying exams and during off times of research, I would often turn back to this problem. I sometimes felt bad for working on it because it was listed as being for "undergraduates". However, I just couldn't get the problem out of my mind and that goes a long way to stop boredom.
My initial goal was just to understand concepts like "oriented graph" and "the square of a graph." Then I kept playing with more and more concepts. Understand that it took me a while to program these things correctly.
Once I was able to program the square of a graph, it was more about understanding what I was seeing, because that was difficult to grasp. Over those 15+ years, I built an extensive JS-canvas graph theory library, and it really helped. I leaned on the canvas elements I had been programming for graph algorithms.
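For readers unfamiliar with the term, the square of a digraph adds an arc u→w whenever some two-step path u→v→w exists. A minimal sketch (a hypothetical helper, not code from my library):

```python
def square(adj):
    """The square of a digraph: keep every original arc, and add an arc
    u->w whenever there is a two-step path u->v->w (with w != u)."""
    sq = {v: set(adj[v]) for v in adj}
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if w != u:
                    sq[u].add(w)
    return sq

# A directed path 0->1->2 gains the shortcut arc 0->2 in its square.
print(square({0: [1], 1: [2], 2: []}))  # {0: {1, 2}, 1: {2}, 2: set()}
```

In the square, a node's out-degree counts both its first and second out-neighbors, which is one way to see why the conjecture is sometimes phrased as degree-doubling.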
Well, I wouldn't be honest if I said I didn't want to solve the problem. But who thinks they're going to solve an open problem? When I tried in 2003 and failed, I thought I had shot my shot. My larger plan was more to understand things like graph theory, data science, machine learning, and artificial intelligence.
Once I started noticing patterns, I started thinking like a machine learning person and asking myself: what are the features of this dataset, the dataset being all oriented graphs? That's when I came up with my first set of definitions, interior and exterior neighbors.
The more I leaned on the machine learning aspect, the more I would come up with these definitions. This was particularly true when I was at a stumbling block in my research. I found that new definitions were able to blow that block down.
With a PhD in Applied Mathematics and a strong background in computer science, I simply went toward what I enjoyed. I enjoy both fields, and I have no problem serving as a bridge between the two.
I also mentioned the machine learning aspect of this. That is important because, while I have dealt with small graphs as examples, the set of all oriented graphs is infinite. Without some way to recognize the patterns and group things together, I couldn't conceive of a way to solve this problem. Machine learning gave me a way to turn this infinite, intractable set into manageable patterns.
For the Public
While the problem is technical, its solution demonstrates the power of human creativity and persistence. The methods used to solve it could inspire new ways of thinking in diverse fields. There are also practical applications of the Seymour Second Neighborhood Conjecture to things like social media and computer networking. The Graph Level Order data structure extends this beyond the proof and the problem itself to the concept of a well-ordering in graph theory, which has many applications in the real world.
The website includes interactive tools and visualizations that explain key ideas in simple terms. These resources are designed to make the proof accessible to a wide audience.
Yes, the concepts I’ve developed have potential applications in graph theory and data science. The data structure used to represent these containers has been recognized for its novelty. I currently have a patent pending for it. This highlights not only the originality of the research but also its potential for practical use in applications that require efficient and dynamic graph representations.
One of the most frustrating aspects of research is the sheer difficulty of discovering open problems. It's easy to hear about famous conjectures like Goldbach’s Conjecture, the Collatz Problem, or Twin Primes. These are widely known and have a clear place in public discourse. But the reality is that there are hundreds of thousands of other problems. All these other problems are equally deserving of attention, but they aren't as well-known or discussed.
The challenge is not just finding these problems, but understanding their potential and how they fit into the broader research landscape. Many of these problems are deeply technical or niche. Without the right networks, it's easy to miss them. This creates a barrier for people who may be interested in contributing to new discoveries but aren’t aware of the problems that could use their skills and fresh perspectives.
There are platforms that are beginning to address this issue. For example, sites like Open Problem Garden and Wikipedia's Unsolved Problems in Mathematics. These platforms offer a space where researchers can find a collection of open problems that aren’t always part of the mainstream discussion but are valuable in their own right. If more people were aware of these resources and made an effort to explore them, we might see a greater interest in solving these problems and more opportunities for collaboration across disciplines.
Technical Questions
Yes, the website includes some computational examples and visualizations. These tools allow users to interact with the concepts and verify key properties.
I encourage others to explore the implications of this proof and test its ideas in related areas. Collaboration and further research will be key to fully understanding its impact.
Next Steps
Future work may explore how the methods used in this proof can be generalized to other problems in mathematics and computer science. I'm also interested in collaborating with others to find practical applications of these results.
Check the website regularly for updates on the publication process, new resources, and potential collaborations.