𝐏𝐫𝐢𝐦𝐨𝐫𝐢𝐚𝐥 (take a look!) to be set aside!

 

(even though it did its job)

 

As long as something is stuck purely in the realm of theory, you fail to notice many things that only "come out in the wash", that is, once you start implementing the ideas.

 

So, while implementing the still rather vaguely outlined algorithm (an inevitable stage!), it turned out that a major problem is that numbers have different lengths, expressed as the number of their digits.

 

As a side note: although we see them in decimal form, we should not forget that inside the computer they are binary.

 

And I keep saying that mathematics should abandon the decimal system: a scrofulous and clumsy creation, chosen completely at random, without establishing the criteria that a decent system should meet for the tasks facing it in a given field.

 

It should be borne in mind that different number systems are possible, not only decimal, hexadecimal or binary, but also, for example, trigintal (base 30):

 

https://matematyka.pl/dyskusje-o-matematyce-f76/systemy-liczbowe-bin-oct-dec-hex-i-trigintalny-t456895.html?sid=1eaa4552816cef71f75f8bd307886f44

 

— the base of this system contains (just as the decimal base does) the factors two and five, but additionally also three, and after all we divide by three more often than by five; in the trigintal system 1/3 would be a finite fraction: you must admit that this is convenient?

 

On the other hand, when it comes to searching for small (well, really only small) prime numbers, the most suitable would be the senary (base-6) system: setting aside two and three, only numbers ending in the digit 1 or 5 could be prime (though of course not every number meeting this condition is prime). This would help schoolchildren become interested in prime numbers, and they are, after all, the foundation of all mathematics...
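
A tiny sketch of this observation (the function names are mine, purely for illustration): convert a number to base 6 and look at its last digit. Every prime greater than 3 must end in 1 or 5 there, because the endings 0, 2, 4 imply divisibility by 2 and the endings 0, 3 imply divisibility by 3.

```python
# Illustrative sketch (names are mine): in base 6, every prime > 3 ends in
# the digit 1 or 5, since endings 0/2/4 imply divisibility by 2 and
# endings 0/3 imply divisibility by 3.

def to_base6(n: int) -> str:
    """Base-6 (senary) representation of a non-negative integer."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, d = divmod(n, 6)
        digits.append(str(d))
    return "".join(reversed(digits))

def is_prime(n: int) -> bool:
    """Trial division; adequate for the small numbers discussed here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

for p in range(5, 50):
    if is_prime(p):
        assert to_base6(p)[-1] in "15"   # holds for every prime > 3
```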

 

The binary system absolutely rules inside the computer, which results from its physical construction, primarily from the fact that the basic unit of information is the bit, and this, as we know, can only take on two values, 0 or 1. However...

 

[ Here's a small → Digression ]

 

Revenons à nos moutons, that is, back to the length of numbers expressed in the number of digits they consist of:

 

Prime numbers are of course no exception in this respect, although they have their own specificity, which will be discussed later.

 

The algorithm (or rather the "idea for an algorithm") assumed that the number being tested would be divided by a product of selected primes. At first I saw the 𝐩𝐫𝐢𝐦𝐨𝐫𝐢𝐚𝐥 as the selector, but its fundamental flaw quickly became obvious, namely that it grows dramatically fast. Hence the need to create a new function, which I called Van: the quotient of two 𝐩𝐫𝐢𝐦𝐨𝐫𝐢𝐚𝐥s. To some extent this function solved the very serious problem resulting from the fact that the 𝐩𝐫𝐢𝐦𝐨𝐫𝐢𝐚𝐥 grows very, very fast: already in the medium registers, even hundreds of times faster than the factorial! And the deeper into the forest, the more trees: not only does it grow fast, but that speed itself keeps growing: one could say the second derivative is also positive, although the third derivative is (fortunately) negative.
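
A minimal sketch of the two functions just mentioned. Since the text does not spell out Van's exact definition, the version below (a quotient of two primorials of the first k primes) is only one plausible reading of it:

```python
from math import factorial

def first_primes(k: int) -> list:
    """The first k primes, by trial division (fine for small k)."""
    primes = []
    n = 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def primorial(k: int) -> int:
    """Product of the first k primes."""
    out = 1
    for p in first_primes(k):
        out *= p
    return out

def van(a: int, b: int) -> int:
    """Assumed reading of 'Van': the quotient of two primorials (a >= b)."""
    return primorial(a) // primorial(b)

# the dramatic growth mentioned above: already at k = 30 the primorial
# dwarfs the factorial
print(primorial(30) > factorial(30))
```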

 

However, this function (Van) was useful only at the conceptual stage, because building a test implementation clearly showed that (for the purpose of constructing successive products) primes must be selected so that the resulting product keeps a more or less constant length on each successive floor; a length expressed in the number of (binary) digits.

 

But this is not the only condition; the second is that each such product should consist of as many components, i.e. prime numbers, as possible. Neither the 𝐩𝐫𝐢𝐦𝐨𝐫𝐢𝐚𝐥 nor Van could handle that problem.

 

So yet another function had to be created, one that does this in the best possible approximation, because it simply cannot be done "exactly". I called this function (or rather PROCEDURE) the nabla-deltorial (nDeltorial for short); it is denoted by the symbol delta preceded by the symbol nabla: ∇Δ(n).

 

The two triangles joined opposite ways, ∇Δ, symbolize the procedure proposed here for EQUALIZING the resulting length of the product of several prime numbers, by means of an appropriately profiled selection of them.

 

In principle, this function could take multiple arguments, but that would only confuse things, so it will take just one, specifying the number of digits that make up the resulting number, i.e. the product of the several primes chosen for the nDeltorial.
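
A rough one-argument sketch of what such a function might look like. This greedy selection is my own illustration, not the author's actual procedure: given a target length L, keep multiplying in primes while the product still fits in L binary digits.

```python
def ndeltorial(L: int, primes):
    """Greedily build a product of distinct primes with exactly L binary digits.

    Returns None when the greedy pass misses the target length; a real
    selection procedure would backtrack or re-profile the choice of primes.
    """
    product = 1
    for p in primes:
        if (product * p).bit_length() <= L:
            product *= p
    return product if product.bit_length() == L else None

print(ndeltorial(8, [2, 3, 5, 7, 11, 13]))   # 2*3*5*7 = 210, an 8-bit number
```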

 

Now fasten your seatbelts, because things are about to get bumpy!

 

1. The primes used (to create the product we will need later) should not be multiplied sequentially, one after the other, but (as a general rule) in the order used in the Four Hills ski-jumping tournament, i.e. the initial, small numbers should be shuffled exactly with the large numbers from the opposite end of the scale: the first is multiplied with the last, the second with the penultimate, the third with the one third from the end. And so on, and so on, and so on...

 

But! This applies not only to individual numbers but to entire blocks of them, where each block is characterized by the fact that all the numbers it contains have the same length, expressed in the number of digits. Binary digits, of course, but I probably need not remind attentive listeners of that...

 

And once again: numbers selected from blocks with a small number of digits will be multiplied alternately with numbers selected from a block with a large number of digits. Also on the principle of the first with the last, the second with the penultimate, and so on, and so on, and so forth...
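
The pairing order described above, for individual primes and for whole blocks alike, can be sketched like this (the helper name is mine):

```python
def interleaved(seq: list) -> list:
    """Reorder: first, last, second, second-to-last, ... ('Four Hills' order)."""
    out = []
    left, right = 0, len(seq) - 1
    while left <= right:
        out.append(seq[left])
        if left != right:
            out.append(seq[right])
        left += 1
        right -= 1
    return out

# small primes shuffled with large ones so the partial products stay balanced
print(interleaved([2, 3, 5, 7, 101, 103, 107, 109]))
```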

 

If anyone fell asleep, let me remind you that we are still talking about multiplying prime numbers, which we need to create their product, which has a specific fingerprint...

 

But (a small thing!) - there are 10 times fewer numbers with n digits than with n + 1 digits, 100 times fewer than with n + 2, and 1000 times fewer than with n + 3 (in this scrofulous decimal system, of course! Fortunately, in the binary system things are much better, because there are only twice as many (n+1)-digit numbers as n-digit ones...)
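
A quick check of these counts; the exact ratio between consecutive digit-lengths is the base itself, 10 in decimal and 2 in binary (the function name is mine):

```python
def count_with_digits(n_digits: int, base: int) -> int:
    """How many positive integers have exactly n_digits digits in `base`."""
    return (base - 1) * base ** (n_digits - 1)

print(count_with_digits(3, 10))   # 900 numbers: 100..999
print(count_with_digits(3, 2))    # 4 numbers: 100..111 in binary, i.e. 4..7
```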

 

I am talking about natural numbers here, of course, because prime numbers are very capricious and such simplified rules do not fit them. However, some approximation is provided by...

 

You can unfasten your seatbelts, but definitely put on your helmets, because it gets even more interesting:

 

The algorithm we are designing here, which is supposed to work quickly and efficiently, cannot possibly perform the highly complex operations described above in real time.

 

Therefore, it must be relieved of as many difficult tasks as possible. This is done by preparing tables of the appropriate numbers in advance, so that it does not have to search for them, but merely sends an order to the warehouse, specifying what it needs by means of the nDeltorial function.

 

The result of the nDeltorial function used in this way would be that the warehouse sends back a RANGE of numbers (attention, attention - it will be difficult, fasten your seatbelts!) of the length specified by the nDeltorial's index. The point is sampling without replacement, i.e. ensuring that during the execution of the algorithm no retrieved set is served a second time. If that did happen, it would be no disaster beyond a waste of time; hence the requirement of "sampling without replacement"...
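
One way to sketch such a warehouse with sampling without replacement (the class layout and names are my own illustration): remember every set of primes already issued, so that no product is ever built twice.

```python
import itertools
import random

class Warehouse:
    """Hands out k-element sets of primes, never the same set twice."""

    def __init__(self, primes):
        self.primes = list(primes)
        self.used = set()                    # frozensets already issued

    def draw(self, k: int):
        """A not-yet-used k-element set of primes, or None when exhausted."""
        candidates = [frozenset(c)
                      for c in itertools.combinations(self.primes, k)]
        random.shuffle(candidates)           # the 'stroke of luck' element
        for c in candidates:
            if c not in self.used:
                self.used.add(c)
                return c
        return None
```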

 

Here’s a digression, but still very much on topic (you can unfasten your seatbelts):

 

When I started working on creating the best possible algorithm for factoring large numbers into primes, I was fascinated by the idea of doing it methodically, correctly, and (by implication) in order. The 𝐩𝐫𝐢𝐦𝐨𝐫𝐢𝐚𝐥 function tempted me with such hope, and I was convinced that it would be possible to establish a perfect order, something that had previously seemed downright impossible, given the chaos of the primes, whose appearance at successive positions is not subject to any (known) method of prediction.

 

I have to say it openly and directly: I was wrong, very wrong! It is true that such an order as I imagined can be introduced, but at the expense of far more important things, above all computational complexity, and therefore the fundamental one: efficiency of operation.

 

During attempts to implement the not-yet-fully-thought-out algorithm, it became absolutely clear that loss-free sampling of the numbers used for testing cannot be achieved, and we simply have to acknowledge and accept that!

 

Is that bad? Well, as it turns out, not at all: if it were otherwise, RSA encryption would be at risk! But fortunately, this is not the case, and cryptographers and cryptologists can sleep soundly, it will not be the case that clever computer programs will completely take away their work.

 

As was said, the test numbers with which we will probe the tested number will inevitably have to be drawn at random, counting on the proverbial stroke of luck, a fluke, or outright a miracle...

 

But the question arises: what positive conclusion follows from all my considerations; what can be done to make factoring programs work as quickly as possible?

 

I have already given the answer, but it was not written in capital letters, so now I will repeat it, indicating its essential meaning, although it was still present in the subtext of the considerations presented above:

 

- so instead of dividing the number under study successively by single prime numbers, we divide it by composite numbers that are products of selected primes. We thus replace n division operations with the faster operation of multiplication, also performed n times, followed by just ONE division: by the product constructed in this way.
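
A minimal sketch of this batching idea. Here a single gcd stands in for the division-and-remainder analysis described next; it extracts at once every prime factor that the product shares with the tested number:

```python
from math import gcd

def batch_test(a: int, primes) -> int:
    """n multiplications and ONE gcd, instead of n separate trial divisions."""
    product = 1
    for p in primes:          # the n (cheap) multiplications
        product *= p
    return gcd(a, product)    # the single (expensive) step

# hypothetical example: 273 = 3 * 7 * 13; one call detects both 3 and 7
print(batch_test(3 * 7 * 13, [2, 3, 5, 7, 11]))   # -> 21
```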

 

The gain is greater the larger n is. Unfortunately, I must admit that this is not the end of the operations to be performed, because the division yields the quotient as well as the remainder, and it is the remainder that interests us most. We now need to determine whether this remainder has any common divisors with the number p or the number b. Let me recall our basic equation, in which these numbers appear:

 

a = b•p + r

 

Let me also recall the fact mentioned earlier: if a number divides the right-hand side of the equation (i.e. both of its terms, where the product b•p is treated as a SINGLE term), then of course it also divides the left-hand side, and it is precisely such a coincidence that we are looking for, counting on the lucky chance that the randomly chosen (I remind you!) number p shares some part of its fingerprint with the number a.
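
A tiny numeric instance of that equation, with hypothetical small values chosen only for illustration:

```python
from math import gcd

a, p = 143, 33          # a = 11 * 13; p = 3 * 11 shares the factor 11 with a
b, r = divmod(a, p)     # a = b*p + r  ->  143 = 4 * 33 + 11
assert a == b * p + r
g = gcd(p, r)           # a common divisor of both right-hand terms...
assert g == 11 and a % g == 0   # ...divides the left-hand side a as well
```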

 

Where is the progress, how is this algorithm better than dividing by single primes to check whether the number under study has the right fingerprint?

 

The advantage is that in cryptographic problems the number under study is very special: it is invariably the product of exactly two large primes, additionally of similar size, which is supposed to make factorization difficult. In the case of asymmetry (especially strong asymmetry), we will hit the smaller factor sooner than when the factors are of similar size. In a non-cryptographic, random number, such a balanced configuration will be rarer than rare...

 

Well, the equation shown above allows us to transform a difficult problem into another, simpler one: the remainder of the division will usually be much easier to factor than the number a itself. And you will surely agree that the possibility of splitting a difficult (though single) problem into a series (even a fairly long one) of definitely easier partial problems must be a step in the right direction, although it is by no means an absolute rule...

 

It all depends on how many such partial problems a single difficult problem must be split into, and that is already a matter of luck, namely of getting the right one from the store of nDeltorials as fast as possible (AFAP).

 

We have to help luck along, and here it is helped by the fact that nDeltorials should be maximally complex, i.e. built from the largest possible number of constituent primes. Thanks to this, our algorithm will work most efficiently! Dividing the tested number by a two-element nDeltorial, we perform the test for both primes it is built from in ONE step, so here we have 200% efficiency...

 

So it's time to look at our nDeltorials store. Let's start with an easier problem, because moving on to more difficult ones, we will forget about one seemingly trivial, but still important thing:

 

We will need nDeltorials with DIFFERENT index values, i.e. different lengths (binary, of course!).

 

Hence the need to create many substores, each containing nDeltorials of one specific length. Well, but as I mentioned, it's a detail that an attentive listener would figure out sooner or later.
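
Such substores could be sketched as a simple map from bit length to the products stored under it (the structure and names are my own illustration):

```python
def make_substores(products):
    """Group precomputed products by their length in binary digits."""
    stores = {}
    for d in products:
        stores.setdefault(d.bit_length(), []).append(d)
    return stores

# hypothetical stock: a few small two- and three-element products
stock = make_substores([2 * 3, 3 * 5, 5 * 7, 2 * 3 * 5, 7 * 11, 11 * 13])
print(sorted(stock))   # the available lengths: one substore per bit length
```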

 

And now fasten your seatbelts, it will be difficult:

 

There is no universal function for creating such nDeltorials, and I don't think one can be created.

 

This means that there are infinitely many recipes for creating nDeltorials, and therefore infinitely many combinations that the methods of their creation can represent. Among other things, each nDeltorial will have a thoroughly random fingerprint, and the role of chance cannot be eliminated here, no mercy! But... there is also no such... NEED! After all, chance rules here anyway: what good would it do to eliminate it "at our end" and take the test numbers in a very SYSTEMATIC way, when the numbers being tested are completely RANDOM?

 

If it were about cryptography, i.e. creating ciphers, this would be an advantage. Unfortunately, we are talking about cryptology, or breaking codes, and at this point it is a disadvantage: if the adversary learns our database of nDeltorials, he will also know our Achilles heel, or rather heels, because we are talking about the plural. He will therefore be able to select cryptographic keys in such a way as to minimize the possibility of breaking them. Therefore, such a database must be secret, and additionally refreshed from time to time, i.e. created completely from scratch. Since creating each such database takes a lot of computational time, such a ready database can be relegated to less important applications, or simply sold on the open market, but of course only to trusted customers, excluding the possibility of further resale, so that it does not fall into the wrong hands.

 

That is almost the end, but it is worth adding a few more minor, but still quite important, remarks here:

 

1.      First of all, unfasten your seat belts, we will no longer need them.

2.      NOP…

3.      Also look at your neighbor: check whether he hasn't by any chance been holding his breath for so long that he lost consciousness. Breathe, for God's sake, you may breathe now! (doctors often forget this command, and the consequences can be truly tragic)

4. In terms of cryptography, using this method requires a very well-thought-out construction of the nDeltorials, because we know perfectly well that the number being tested will have no small divisors, only large ones.

 

5. But after a moment's reflection it must be noted that the joy is premature, because this means it makes sense to create only two-element nDeltorials, so the profit from using them becomes (to put it mildly) not very impressive.

 

6. This opens up the field for a kind of arms race, meaning a change in the approach to creating cryptographic keys: until now, the best solution was to build them from two very close primes. Now that becomes the wrong direction, because then we know which nDeltorials make sense and which we can give up: we simply take the square root of the number being tested, obtaining a value around which the two factors of interest will oscillate.
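
A one-line check of that last observation, with hypothetical small primes standing in for the large ones:

```python
from math import isqrt

p, q = 101, 103            # a 'balanced' key: two primes of similar size
n = p * q
root = isqrt(n)            # integer square root of the tested number
assert p <= root <= q      # both factors oscillate around sqrt(n)
```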

 

7. Under the pressure of the method presented here, the approach will have to change, and keys will have to be made more asymmetric, meaning that one of the factors will be noticeably larger than the other; this increases susceptibility to attack by classical methods, but reduces susceptibility to attack by the method presented here.

 

This is an interesting issue, but quite complex, so I will not discuss it here, but it is clear to me that people who know about it will understand perfectly well what it is about.

 

8. As has been said, nDeltorials with cryptological applications (i.e. for breaking codes) must necessarily be two-element, i.e. products of two large prime numbers.

 

9. But I wonder whether there are not more possibilities in the opposite approach, which has just occurred to me:

 

10. So instead of dividing the tested number by a two-element nDeltorial, we could do something perverse that allows us to use multi-element nDeltorials (in principle with no restriction on their length, which is truly groundbreaking!), except that we perform the division in reverse: the multi-element nDeltorial, whose fingerprint we know, is divided by the tested number, and as a result we obtain a remainder that can be (and usually will be!) much easier to factor than the tested number itself.
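
A sketch of this reversed division (the values are hypothetical; the point is only that the remainder inherits every divisor the nDeltorial shares with the tested number, while being far smaller and thus easier to factor):

```python
from math import gcd

a = 101 * 9973                       # tested number: two primes, one 'small'
D = 2 * 3 * 5 * 7 * 11 * 13 * 101    # multi-element nDeltorial containing 101
r = D % a                            # divide the nDeltorial BY the tested number
assert 0 < r < a                     # the remainder is much smaller than a
assert gcd(a, r) == 101              # the shared divisor survives in r
```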

 

And if we are lucky, we will find exactly one of the divisors of the tested number, but only if the nDeltorial used also contains that divisor.

 

As you might expect, by factoring this remainder we will very likely obtain several small primes (of no importance to us) and exactly one of the primes composing the number under consideration, which completely solves the problem of factoring it.

At this point, an idea came to my mind, I will have to think about it, but for now:

1. Anyone interested in historical elements can take a look here.

2. And this is a 'shrink' for this material:     tinyurl.com/nDeltorial

3. In addition, I am announcing that the lecture has just ended...

 

 

 ⁣© ⁣ ⁣ 𝑁𝑖𝑐𝑜𝑙𝑎𝑠 𝑅𝑎𝐵𝑎𝑟𝑏𝑎𝑟(𝑠)𝑘𝑖

       (𝐢𝐧 𝐋𝐯𝐢𝐯-𝐒𝐜𝐨𝐭𝐭𝐢𝐬𝐡

                 𝐂𝐨𝐨𝐤𝐁𝐨𝐨𝐤

                  prime floor:

𝐬𝐞𝐥𝐞𝐫, 𝐩𝐚𝐫𝐬𝐥𝐞𝐲 & 𝐩𝐨𝐨𝐫)