Tuesday, January 13, 2015

Why does dd from /dev/urandom stop early? [duplicate]







On my current Linux system (Debian Jessie, amd64), I am getting surprising behavior from dd when reading from /dev/urandom (the behavior of /dev/random, by contrast, is properly documented). If I naively ask for 1G of random data:



$ dd if=/dev/urandom of=random.raw bs=1G count=1
0+1 records in
0+1 records out
33554431 bytes (34 MB) copied, 2.2481 s, 14.9 MB/s
$ echo $?
0
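

A side note: the "0+1 records in" line means dd read zero full blocks plus one partial block, and the byte count is exactly one byte short of 32 MiB, as a quick shell check confirms:

$ echo $((33554431 == 32 * 1024 * 1024 - 1))
1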


In this case only 34 MB of random data are stored, while if I use multiple reads:



$ dd if=/dev/urandom of=random.raw bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 70.4749 s, 14.9 MB/s


then I properly get my 1G of random data.
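

Reading the dd(1) man page, I also see iflag=fullblock, which is documented to accumulate full blocks of input across short reads. If short reads are what stops dd, I would expect this variant (a sketch I have not timed) to produce the full 1G in a single-block invocation as well:

$ dd if=/dev/urandom of=random.raw bs=1G count=1 iflag=fullblock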


The documentation for urandom is rather vague on this point:



A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.



I guess the documentation implies there is some sort of maximum read size for urandom.


I am also guessing that the size of the entropy pool is about 34 MB on my system, which would explain why the first 1G read stopped at roughly 34 MB.


But my question is: how do I know the size of my entropy pool? Or is dd being stopped by another factor (some kind of timing issue associated with urandom)?
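

For what it is worth, random(4) says the kernel exposes the pool state through procfs, with both values expressed in bits:

$ cat /proc/sys/kernel/random/poolsize
$ cat /proc/sys/kernel/random/entropy_avail

If the pool size really is a few thousand bits (4096 on kernels of this generation, if I understand correctly), it is nowhere near 34 MB, so perhaps the pool size is not the explanation after all.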


