Again, it is a very nice paper. My comments about the current version can be taken as recommendations rather than requirements; or, if any of them are required, that would be at the discretion of the editor.

1) Although I am not going to coerce any further changes to this cool paper, I stand by my recommendation concerning asymptotic notation. I can say more about the issues here.

1a) The tilde notation f(x) ~ g(x) is widely used in analysis, but with the incompatible meaning that f(x) = g(x)(1+o(1)). (The latter is how it would usually be written in computer science, and sometimes in analysis as well.) Yes, you give a rigorous definition, but it feels a bit like redefining x = y to mean that floor(x) and floor(y) are equal.

1b) I am nonplussed by the comment that the O(f(x)) and \tilde{O}(f(x)) notation is more cumbersome, and by the comment that both forms are asymmetric. Concerning the second point, both forms have symmetric versions, namely \Theta(f(x)) and \tilde{\Theta}(f(x)). Besides, although you can use those when needed, I don't see that you need them very often. The phrase "at most" in the statement of the main theorem appropriately restricts your notation to the one-sided form. You write "at most" in several other places in the paper, and you could have it in even more places. (Except that at many points it would be redundant with writing \tilde{O}(f(x)) instead.)

1c) If all else were equal, standard asymptotic notation would conceivably be very slightly more cumbersome than the (redefined) tilde. But much of the issue may just be getting used to what is standard, and all else is far from equal. Computer scientists are clearly part of the natural audience for this paper. The notation O(f(x)) (upper bound with a constant fudge factor), \Omega(f(x)) (lower bound), and \Theta(f(x)) (two-sided bound) is not only used in tens or hundreds of thousands of computer science papers; even computer science undergraduates are widely trained in it.
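For concreteness, the standard conventions I have in mind could be stated as follows (this is just one possible phrasing, not a prescription for the paper's wording):

```latex
% Standard one-sided and two-sided asymptotic notation, for large x:
%   f(x) = O(g(x))      iff  |f(x)| <= C g(x) for some constant C > 0,
%   f(x) = \Omega(g(x)) iff  f(x) >= c g(x) for some constant c > 0,
%   f(x) = \Theta(g(x)) iff  both of the above hold.
% The tilde variants absorb polylogarithmic factors, e.g.
\[
  f(x) = \tilde{O}(g(x))
  \iff
  f(x) = O\bigl(g(x)\,(\log g(x))^{k}\bigr) \text{ for some fixed } k \ge 0.
\]
```

With these in hand, "at most \tilde{O}(f(x))" is redundant, while a plain \tilde{O}(f(x)) already carries the one-sided meaning.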
The tilde extension with a polylog fudge factor is also well established; a quick search gives, as an impromptu lower bound, more than a thousand papers that use it. As I said, I won't go to the mat over this, but I don't know why the authors would want to either.

2) The more uniform use of the notation [a,b)_{\mathbb{Z}} seems helpful, but since you are going this route, more editing is merited, as follows:

2a) Section 2.1 begins with the old formulation using an interval (-1,2n) in \mathbb{R}. If it is important to define an unnumbered Gauss diagram with a real parameter, then I suppose that the interval should be a more general (a,b)?

2b) The subscript is not always blackboard bold as it should be; in some places you have just [a,b)_Z.

2c) The intervals-of-integers notation is later appropriately extended to open and closed intervals, but AFAIK only the half-open version is defined. I recommend a more careful definition at the beginning.

3) The problem with the n-bar notation in the statement of Theorem 3.1 is that it looks like you are exponentiating a number rather than a set. The notation [n] for the same thing is very standard in combinatorics, for instance in the textbook by Richard Stanley.

4) I think that the generalization of the main theorem in the very last paragraph should be interesting to more people than just me. Without particularly changing it, I think that it would make more sense in the introduction than on the last page. Space-time tradeoffs are a major topic in both theoretical and practical computer science.

5) I recommend adding the arXiv numbers to references [GPV00] and [BNBNHS23], respectively arXiv:math/9810073 and arXiv:2108.10923.