Analysis of Public Key and Private Key Cryptography

What is cryptography?

We can define cryptography as the mathematical theory and computational practice of keeping data confidential over the internet. If I want to send a secure message to my friend Qasim, I can make up a rule such as shifting every character by 5: A becomes F, B becomes G, and so on. This way no one can understand the message except the person who knows the rule. There are many ways to devise such rules, and modern cryptography uses sophisticated mathematical formulas and complex algorithms to make messages more secure and authenticated.
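As a concrete sketch of the shift-by-5 rule above, here is a minimal Python illustration (a Caesar-style cipher; the function names are my own, and this is for intuition only, not a secure cipher):

```python
# Toy illustration of the shift-by-5 rule: shift every letter forward by
# a fixed amount, wrapping around past Z. Not a secure cipher.

def shift_encrypt(message: str, shift: int = 5) -> str:
    """Shift every letter forward by `shift` positions, wrapping past Z."""
    result = []
    for ch in message:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(result)

def shift_decrypt(ciphertext: str, shift: int = 5) -> str:
    """Undo the shift by moving every letter back by `shift` positions."""
    return shift_encrypt(ciphertext, -shift)

ciphertext = shift_encrypt("AB")   # "AB" becomes "FG", as in the text
```

Only the person who knows the rule (the shift amount, i.e. the key) can run `shift_decrypt` and recover the original message.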

Types of Messages:

v Plaintext message:

o This type of message can be read and understood by everyone; no rules or mathematical transformations are applied to a plaintext message.

v Ciphertext message:

o A ciphertext message cannot be understood by everyone, because it is generated by applying some rule or mathematical algorithm to the plaintext.

The process of encrypting and decrypting a message. [5]

Why Cryptography?

There are four major aspects of cryptography that ensure data can be sent safely over a public network. On a public network anyone can be present, and anyone may try to intercept traffic or access other workstations. We cannot hide our data from the network itself, because the network is the only way to send information to other people; what we can do is transform the data into an unreadable form.

Today cryptography has become one of the main tools for privacy, access control, corporate security and electronic payment. The strength of a cryptosystem depends on the mathematical functions and formulas used in the encryption and decryption process. These functions and algorithms work in combination with a key (a word, number, or phrase) to encrypt the plaintext, and the same plaintext encrypts to different ciphertexts under different keys. The security of encrypted data is entirely dependent on two things: the strength of the cryptographic algorithm and the secrecy of the key. With the help of the four characteristics of cryptography described below, we can be assured that our data is safe. The sender and receiver must agree with each other to keep the key secret, and if they are in different physical locations they must make sure the medium used to share the key is secure.


v Confidentiality:

o If I send a message to someone, no one other than the intended person can read it. This service restricts access to the content of sensitive data to only those individuals who are authorized to view it, and prevents the unauthorized disclosure of information to other individuals or processes.

v Integrity:

o This service addresses the unauthorized or accidental modification of data, including data insertion, deletion, and modification. To ensure data integrity, a system must be able to detect unauthorized data modification. The goal is for the receiver of the data to verify that the data has not been altered: the receiver knows that the message from the sender has not been tampered with in transit.

v Authenticity:

o This service establishes the validity of a transmission or message. The receiver knows that only the sender could have sent the message he has just received.

v Non-repudiation:

o The goal is to assure the receiver of the sender's identity. It is impossible for the sender to turn around later and claim that he did not send the message.
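The integrity service above can be sketched with a cryptographic digest: the receiver recomputes the hash of what arrived and compares it with the digest the sender produced. This is a minimal illustration using Python's standard hashlib; a bare hash assumes the digest itself travels safely, and real protocols protect it with a MAC or a digital signature.

```python
# Detecting modification in transit by comparing SHA-256 digests.
import hashlib

message = b"transfer 100 to Qasim"
digest = hashlib.sha256(message).hexdigest()   # sent along with the message

received = b"transfer 900 to Qasim"            # tampered with in transit
tampered = hashlib.sha256(received).hexdigest() != digest   # True: detected
intact = hashlib.sha256(message).hexdigest() == digest      # True: verified
```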

Types of cryptography:

There are two types of cryptography. They take opposite approaches to key handling, and each is used according to the needs of the environment and the requirements of the encryption.

v Symmetric Encryption

v Asymmetric Encryption

Symmetric encryption:

Symmetric encryption is a type of encryption in which the same key serves for both encryption and decryption. The symmetric key is a shared secret between the sender and receiver of the given information, so it is also known as a secret key. The main use of symmetric keys is in security protocols that establish secure, confidential communication between parties; the most popular are Internet Protocol Security (IPsec) and Transport Layer Security (TLS). These protocols are responsible for maintaining a secure session during communication. If more than one communication is ongoing, they manage a separate session for each one, keeping each session alive and secure for the whole communication and terminating the secure session when the communication finishes.

Symmetric key encryption schemes are much faster than asymmetric encryption, which is why they are widely used for bulk encryption of persistent data, e.g. email messages, newsletters and document files. The weakness of this type of cryptography is the process of passing the key to the other party: both parties must agree on the key, and it must be delivered securely, before communication can begin.
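A minimal sketch of the symmetric idea follows: one shared secret key both encrypts and decrypts, so applying the same operation twice restores the plaintext. The repeating-key XOR below is for illustration only; real systems use vetted algorithms such as AES.

```python
# Toy symmetric cipher: the same shared key encrypts and decrypts.
# Repeating-key XOR is NOT secure; it only illustrates the key-sharing idea.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key; applying it twice restores data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret-key"                       # agreed on by both parties
ciphertext = xor_cipher(b"meet at noon", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)   # the same key decrypts
```

The scheme's weakness is visible in the code: `shared_key` must somehow reach the receiver securely before any message can be exchanged.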

Symmetric key encryption. [5]

Asymmetric Encryption:

This is the second type of encryption technique, used to convert ciphertext to plaintext (decryption) and vice versa. In this type of encryption and decryption both a public key and a private key are used. Asymmetric algorithms are poorly suited to encrypting large messages because they are relatively slow. Instead, these algorithms are used to achieve authentication, integrity and non-repudiation, and to support confidentiality through key management. Asymmetric algorithms are used to perform the three major operations explained here.

v Digital Signature:

o Before sending a message, the sender generates a digital signature for it using a message digest and his private key. On the receiving side, the receiver recomputes the message digest and uses the sender's public key to validate the signature. Digital signatures can be used to authenticate a system or application; in the same way, they can also verify data integrity, revealing whether the data has been changed since the signature was applied.

v Key Agreement:

o Some algorithms support key agreement. In this scenario, if the sender has his own private key and the receiver's public key, and the receiver has his own private key and the sender's public key, they can apply a mathematical algorithm to generate the same value on both sides. Someone in between them may have the public keys, but cannot generate the correct value.

v Key Transport:

o Some asymmetric algorithms can be used to encrypt and decrypt data. In practice these algorithms are never used to encrypt large amounts of data, because they are much slower than symmetric key algorithms. However, they are perfectly suited to encrypting small amounts of data, such as a symmetric key. This operation is called key transport or key exchange, and is used in many protocols.
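The key-agreement operation described above can be sketched with a toy Diffie-Hellman exchange: each side combines its own private key with the other side's public key, and both arrive at the same value. The tiny numbers here are assumptions for illustration; real deployments use large, standardized groups.

```python
# Toy Diffie-Hellman key agreement. Both sides compute g^(a*b) mod p
# without ever transmitting a private key.

p, g = 997, 5                   # small public prime and base (toy values)

a = 123                         # sender's private key
A = pow(g, a, p)                # sender's public key, sent to the receiver

b = 456                         # receiver's private key
B = pow(g, b, p)                # receiver's public key, sent to the sender

sender_secret = pow(B, a, p)    # sender combines B with private a
receiver_secret = pow(A, b, p)  # receiver combines A with private b
assert sender_secret == receiver_secret   # both sides share the same value
```

An eavesdropper who sees only p, g, A and B cannot easily compute the shared value, which is the point made in the key-agreement paragraph above.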

In a classic cryptosystem, we have encryption functions E_K and decryption functions D_K such that D_K(E_K(P)) = P for any plaintext P. In a public-key cryptosystem, E_K can be easily computed by anyone who has the public key X, which in turn is computed from K. X is published, so that anyone can encrypt messages. If the decryption D_K cannot be easily computed from the public key X without knowledge of the private key K, but can be computed readily with knowledge of K, then only the person who generated K can decrypt messages. That is the essence of public-key cryptography.

In earlier years there was a major problem with sharing keys, called the key distribution problem. In the early 1970s the British secret service worked out a way to solve this problem, but did not disclose it to the public; later, in 1976, Whitfield Diffie and Martin Hellman published a mechanism for solving the key distribution problem [6]. A public key can be held by many people, and with it they can only encrypt data, nothing more. This is useful for people who have no secure system of their own: they simply encrypt the information with the public key and send it to the receiver, and the receiver must have the private key to decrypt the message, because that is the only way to recover the data in readable (plaintext) form. Some well-known public-key algorithms are RSA, DSA and ElGamal.

Public key encryption. [5]

In public key encryption, each person has a public ciphering key and a private ciphering key. The public key is made available to everyone, but the private key is not available to anyone else; it remains secret.

Public key cryptography tends to be slower than private key cryptography, so it is usually used in an auxiliary capacity. When does a user need public key encryption? Chiefly when two persons or parties have never had contact before and have not established any prior trust in each other. The corresponding strength of public key cryptography is that it solves the key passing problem.

Encryption Algorithms:

Cryptographic algorithms are crucial security tools for guaranteeing the secrecy of sensitive data, and their area of application is wide and growing. Consequently, in domains where security is a major issue, as in electronic commerce or electronic voting, the need for confidence in correct implementations is increasing dramatically. Leaks in security, for example in communication between banks, could be very costly and should be prevented, so the use of correct implementations of cryptographic algorithms is essential, and implementing them must be done with extreme care. Verifying these algorithms with reliable methods is therefore a central objective: a verification can prove crucial properties or, in the optimal case, all relevant facts needed for an algorithm's application. Verification approaches for cryptographic primitives have developed in two distinct directions: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. A proof of the functional correctness of an implementation can be achieved by formal verification. But in the area of cryptography the algorithms used are often very complex, so formal verification is a big challenge and is achieved only with much effort.

Rabin public-key scheme:

In 1979 Michael Rabin introduced, in his well-known paper, a new class of public-key functions built around a number n = p · q with large primes p and q. The number n is the public key; p and q are the private keys, which can be used for signing and decryption. A main result is that, for any given n, efficient inversion of the function y = E_n(x) described below, even for a small percentage of the values of y, implies efficient factorization of n. One of the public-key functions is given below.

For a given number n = p · q that is a product of two large prime numbers p, q, and for a given b with 0 ≤ b < n, the encryption function is

E_{n,b}(x) ≡ x(x + b) mod n.

International Data Encryption Algorithm (IDEA):
Work on IDEA started in the early 1990s. The basic idea is to operate on 64-bit blocks using a 128-bit key. It mixes operations from different algebraic groups to add more security; the major operations are XOR, modular addition and modular multiplication. Unlike DES it does not do bit-level manipulation, and it uses the same algorithm for encryption and decryption. IDEA takes a 64-bit block of plaintext and divides it into four 16-bit sub-blocks. It takes the 128-bit key and expands it to generate 52 subkeys of 16 bits each. In simple terms, each round takes eight 16-bit words from the key and then performs a circular left shift of the original 128-bit key by 25 bits; by the end of the schedule, 52 subkeys have been generated in total.

RSA Encryption:

It is very simple to multiply numbers together, especially with computers. But it can be very difficult to factor numbers. For example, if I ask you to multiply 34537 and 99991, it is a simple matter to punch those numbers into a calculator and get 3453389167. But the reverse problem is much harder. Suppose I give you the number 1459160519. I'll even tell you that I got it by multiplying together two integers. Can you tell me what they are? This is a very difficult problem. A computer can factor that number fairly quickly, but it basically does so by trying most of the possible combinations. For a number of any size, the computer has to check something on the order of the square root of the number to be factored. In this case, that square root is roughly 38000.

Now it doesn't take a computer long to try out 38000 possibilities, but what if the number to be factored is not ten digits, but rather 400 digits? The square root of a 400-digit number has 200 digits. The lifetime of the universe is approximately 10^18 seconds, an 18-digit number. Assuming a computer could test one million factorizations per second, in the lifetime of the universe it could check about 10^24 possibilities. But for a 400-digit product, there are about 10^200 possibilities. This means the computer would have to run for 10^176 times the life of the universe to factor the large number. It is, however, not too hard to check whether a number is prime, in other words to see that it cannot be factored. If it is not prime, it is difficult to factor; but if it is prime, it is not hard to show that it is prime. So RSA encryption works like this. I will find two huge prime numbers, p and q, that have 100 or maybe 200 digits each. I will keep those two numbers secret, and I will multiply them together to make a number N = pq. That number N is basically my public key. It is relatively easy for me to compute N; I just need to multiply my two numbers. But if you know only N, it is basically impossible for you to find p and q. To get them, you need to factor N, which seems to be an incredibly difficult problem.

But exactly how is N used to encode a message, and how are p and q used to decode it? Below is a complete example, though I will use tiny prime numbers so it is easy to follow the arithmetic. In a real RSA encryption system, keep in mind that the prime numbers are huge. In the following example, suppose that person A wants to make a public key, and that person B wants to use that key to send A a message. We will suppose that the message B sends to A is just a number, and we assume that A and B have agreed on a method to encode text as numbers. Here are the steps:

  • Person A selects two prime numbers. We will use p = 23 and q = 41 for this example, but keep in mind that the real numbers person A should use are much larger.
  • Person A multiplies p and q together to get pq = (23)(41) = 943. This becomes the public key, which he tells to person B.
  • Person A also chooses another number e which must be relatively prime to (p − 1)(q − 1). In this case, (p − 1)(q − 1) = (22)(40) = 880, so e = 7 is fine. e is also part of the public key, so B is also told the value of e.
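The remaining step, not spelled out above, is computing the private exponent d, the inverse of e modulo (p − 1)(q − 1). With the toy numbers from the example, the whole scheme can be run end to end (this sketch assumes Python 3.8+ for the modular inverse via `pow`):

```python
# The toy RSA example above, carried through encryption and decryption.
# The message value 35 is an arbitrary choice for illustration.

p, q = 23, 41
N = p * q                         # public modulus, 943
phi = (p - 1) * (q - 1)           # (22)(40) = 880
e = 7                             # public exponent, relatively prime to 880
d = pow(e, -1, phi)               # private exponent: 7*d = 1 (mod 880)

message = 35                      # B's message, already encoded as a number
ciphertext = pow(message, e, N)   # B encrypts with the public key (N, e)
decrypted = pow(ciphertext, d, N) # A decrypts with the private exponent d
assert decrypted == message
```

Only person A, who knows p and q (and hence phi and d), can perform the final step; anyone who knows only N = 943 would have to factor it first.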

DES Encryption:

In 1972, the National Institute of Standards and Technology (NIST) decided that a strong cryptographic algorithm was needed to protect non-classified information. The algorithm was required to be cheap, widely available, and very secure. NIST envisioned something that would be available to the general public and could be used in a wide variety of applications. So they asked for public proposals for such an algorithm. In 1974 IBM submitted the Lucifer algorithm, which appeared to meet most of NIST's design requirements.

NIST enlisted the help of the National Security Agency to evaluate the security of Lucifer. At the time many people distrusted the NSA due to their extremely secretive activities, so there was initially a certain degree of skepticism regarding the analysis of Lucifer. One of the greatest worries was that the key length, originally 128 bits, was reduced to just 56 bits, weakening it significantly. The NSA was also accused of changing the algorithm to plant a “back door” in it that would allow agents to decrypt any information without having to know the encryption key. But these fears proved unjustified and no such back door has ever been found.

The modified Lucifer algorithm was adopted by NIST as a federal standard in 1976. Its name was changed to the Data Encryption Standard (DES). The algorithm specification was published in January 1977, and with the official backing of the government it became a very widely employed algorithm in a short amount of time.

Unfortunately, over time various shortcut attacks were found that could significantly reduce the amount of time needed to find a DES key by brute force. And as computers became progressively faster and more powerful, it was recognized that a 56-bit key was simply not large enough for high security applications. As a result of these serious flaws, NIST abandoned their official endorsement of DES in 1997 and began work on a replacement, to be called the Advanced Encryption Standard (AES). Despite the growing concerns about its vulnerability, DES is still widely used by financial services and other industries worldwide to protect sensitive on-line applications.

To highlight the need for stronger security than a 56-bit key can offer, RSA Data Security has been sponsoring a series of DES cracking contests since early 1997. In 1998 the Electronic Frontier Foundation won the RSA DES Challenge II-2 contest by breaking DES in less than 3 days. EFF used a specially developed computer called the DES Cracker, which was developed for under $250,000. The encryption chip that powered the DES Cracker was capable of processing 88 billion keys per second. More recently, in early 1999, Distributed.net used the DES Cracker and a worldwide network of nearly 100,000 PCs to win the RSA DES Challenge III in a record-breaking 22 hours and 15 minutes. The DES Cracker and PCs combined were testing 245 billion keys per second when the correct key was found. In addition, it has been shown that for a cost of one million dollars a dedicated hardware device can be built that can search all possible DES keys in about 3.5 hours. This just serves to illustrate that any organization with moderate resources can break through DES with very little effort these days. [12]

Public key Problems:

There are a few major problems in public key cryptography. The first is the key exchange problem: ensuring that keys are exchanged so that the sender and receiver can perform encryption and decryption, and doing so in a way that ensures an eavesdropper or outside party cannot break the code. Another problem is authentication: the receiver needs assurance that a message was encrypted by a given entity and not by someone else.

Private key cryptography can help satisfy these constraints, because it is the simplest and least expensive method in use: in such a scheme, only the sender holds a key capable of encrypting sensible messages delivered to the receiver. In the same way, public key cryptography methods help solve a critical part of the key exchange problem, specifically their resistance to analysis even in the presence of a passive eavesdropper during the exchange of keys. However, they do not solve all problems associated with key exchange. In particular, since the keys are considered public knowledge, some other mechanism must be developed to testify to authenticity, because possession of a key alone is no evidence of the particular, unique identity of the sender.

Key Distribution:

It is possible to construct an 'unbreakable' cipher using randomness: the one-time pad, whose key is a string of characters as long as the message. One weakness of all the ciphers we have studied so far is the problem of key distribution. If Eve can get hold of the key, then she can decrypt the cipher. On the other hand, Alice and Bob must both know the key, or they cannot communicate. So they must share the key by some secure method which Eve cannot penetrate.

In the classical field of intelligence, a spy is given the key (which might be one copy of the one-time pad, the other copy being held by the home agency) before being sent out into the field. Since the key must not be re-used, the spy can only send as much information as the key he possesses; then he must return to base for a new one-time pad. This system can work well if the spy keeps the pad on his person and destroys each page when it is used. Of course, having a one-time pad on your person might be extremely dangerous! Other ciphers use a key which is smaller than the message. For example, a military commander might be issued with a set of keys and instructed to use a new key every month according to some schedule. But if the enemy captures the keys, then all communications can be read until the whole set of keys is changed; this change may be difficult in wartime.

The commercial use of cryptography since the Second World War introduced new problems. Commercial organisations need to exchange secure communications; the only way of exchanging keys seemed to be by using trusted couriers. The amount of courier traffic began to grow out of control. It was the invention of public-key cryptography which gave us a way round the key distribution problem.

That there is a possible way around the problem is suggested by the following story. Alice and Bob wish to communicate by post, but they know that Eve's agents have control of the postal service, and any letter they send will be opened and read unless it is securely fastened. Alice can put a letter in a box, padlock the box, and send it to Bob; but Bob will be unable to open the box unless he already has a copy of Alice's key.

The solution is as follows. Alice puts her letter in the box, padlocks it and sends it to Bob. Now Bob cannot open the box. Instead, he puts his own padlock on the box and sends it back to Alice. Now Alice removes her padlock and returns the box to Bob, who then simply has to remove his own padlock and open the box. A little more formally, let Alice's encryption and decryption functions be eA and dA, and let Bob's be eB and dB. This means that Alice encrypts the plaintext p as eA(p); she can also decrypt this to p, which means that dA(eA(p)) = p.

Now Alice wants to send the plaintext p to Bob by the above scheme. She first encrypts it as eA(p) and sends it to Bob. He encrypts it as eB(eA(p)) and returns it to Alice. Now we have to make a crucial assumption: eA and eB commute, that is, eA ∘ eB = eB ∘ eA. Now Alice has (eB ∘ eA)(p), which is equal to (eA ∘ eB)(p) = eA(eB(p)) according to our assumption. Alice can now decrypt this to give dA(eA(eB(p))) = eB(p) and send this to Bob, who then calculates dB(eB(p)) = p. At no time during the transaction is any unencrypted message transmitted or any key exchanged. Note that the operations of putting two padlocks onto a box do indeed commute.

The method would not work if, instead, Bob put the box inside another box and locked the outer box; the operations don’t commute in this case. If the letter that Alice sends to Bob is the key to a cipher (say a one-time pad), then Alice and Bob can now use this cipher in the usual way to communicate safely, without the need for the to-and-fro originally required. The system only depends on the security of the ciphers used by Alice and Bob for the exchange, and the fact that they commute.

Now if Alice and Bob use binary one-time pads for the key exchange, then these conditions are satisfied, since binary addition (XOR, written ⊕) is a commutative operation. However, further thought shows that this is not a solution at all! Suppose that Alice wants to send the string l securely to Bob (perhaps for later use as a one-time pad). She encrypts it as l ⊕ kA, where kA is a random key chosen by Alice and known to nobody else. Bob re-encrypts this as (l ⊕ kA) ⊕ kB, where kB is a random key chosen by Bob and known to nobody else. Now (l ⊕ kA) ⊕ kB = (l ⊕ kB) ⊕ kA, so when Alice re-encrypts this message with kA she obtains

((l ⊕ kB) ⊕ kA) ⊕ kA = (l ⊕ kB) ⊕ (kA ⊕ kA) = l ⊕ kB.

When Bob finally re-encrypts this with kB he obtains (l ⊕ kB) ⊕ kB = l. This is the exact analogue of the box with two padlocks. If Eve intercepts only one of these three transmissions, it is impossible for her to read the message, since each is securely encrypted with a one-time pad. However, we must assume that Eve will intercept all three transmissions. Now if she simply adds all three together mod 2, she obtains

(l ⊕ kA) ⊕ (l ⊕ kA ⊕ kB) ⊕ (l ⊕ kB) = l,

and she has recovered the message.
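The failure just derived can be made concrete in a few lines: Eve records all three transmissions and XORs them together, and every key cancels out, leaving the message.

```python
# The XOR three-pass exchange and Eve's attack: XORing all three
# intercepted transmissions cancels both keys and reveals the message.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

l = b"the one-time pad"             # the string Alice wants to send
kA = secrets.token_bytes(len(l))    # Alice's random key, known only to her
kB = secrets.token_bytes(len(l))    # Bob's random key, known only to him

t1 = xor(l, kA)      # Alice -> Bob:   l + kA
t2 = xor(t1, kB)     # Bob -> Alice:   l + kA + kB
t3 = xor(t2, kA)     # Alice -> Bob:   l + kB  (Alice removed her key)

bob_reads = xor(t3, kB)             # Bob removes his key and reads l
eve_reads = xor(xor(t1, t2), t3)    # Eve XORs all three: kA and kB cancel
```

Whatever random keys are chosen, `eve_reads` always equals the original message, which is exactly the algebra in the equation above.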

Elliptic curves:

Elliptic curves are used in public key cryptography to send encrypted messages between computers. Software developers have a choice: they can use any elliptic curve, over any field. Fields of characteristic 2 (binary fields) are attractive because adding two elements in such a field is merely an XOR operation on the bits; naturally, these have become the preferred fields. But since the tangent formula has a 2 in the denominator, and 2 = 0 in such a field, something has to give: the formula for an elliptic curve changes. There are still two parameters a and b, but the cubic equation is different.

y² + xy = x³ + ax² + b
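Two facts used above, that field addition is just XOR and that 2 = 0 in such a field, can be checked directly. This is a minimal sketch treating binary-field elements as Python integers holding their bit representations:

```python
# Addition in a binary field GF(2^m): field elements are bit strings,
# and adding them is plain XOR, so x + x = 0 (the field has characteristic 2).

def gf_add(a: int, b: int) -> int:
    """Addition in GF(2^m): XOR the bit representations."""
    return a ^ b

x = 0b1011              # a field element, written as bits
y = 0b0110

s = gf_add(x, y)        # bitwise XOR, no carries
zero = gf_add(x, x)     # x + x = 0: this is why 2 = 0 in the field
```

Because doubling any element gives zero, the tangent-line formula with its division by 2 breaks down, which is why the curve equation takes the modified form shown above.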

ECC offers considerably greater security for a given key size. The smaller key size also makes possible much more compact implementations for a given level of security, which means faster cryptographic operations, running on smaller chips or more compact software. This means less heat production and less power consumption. All of which is of particular advantage in constrained devices, but of some advantage anywhere. There are extremely efficient, compact hardware implementations available for ECC exponentiation operations, offering potential reductions in implementation footprint even beyond those due to the smaller key length alone. [9]

Chart comparison of the cryptography:

[Chart: the operations of asymmetric cryptography: digital signature, key agreement, and key transport.]

[1] _tenv.mspx?mfr=true





[6] W. Diffie, M.E. Hellman, “New Directions in Cryptography,” IEEE Transactions on Information Theory, v. IT-22, n. 6, Nov 1976.


[8] Computer Proven Correctness of the Rabin Public-Key Scheme by Johannes Buchmann, Markus Kaiser.


[10] Applied Cryptography by Bruce Schneier; 2nd Edition

[11] Cryptography and Network Security by William Stallings; 4th Edition.
