History log of /drivers/char/random.c
Revision Date Author Comments
d4c5efdb97773f59a2b711754ca0953f24516739 27-Aug-2014 Daniel Borkmann <dborkman@redhat.com> random: add and use memzero_explicit() for clearing data

zatimend has reported that in his environment (3.16/gcc4.8.3/corei7),
memset() calls which clear out sensitive data in extract_{buf,entropy,
entropy_user}() in the random driver are being optimized away by gcc.

Add a helper memzero_explicit() (similar to the explicit_bzero() variants)
that can be used in cases where a variable holding sensitive data is
cleared out at the end. Other use cases might also be in crypto
code. [ I have put this into lib/string.c though, as it's always built-in
and doesn't need any dependencies then. ]
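
A minimal userspace sketch of the idea (the in-kernel helper lives in
lib/string.c and may use a different barrier primitive): clear the buffer,
then force the compiler to treat the cleared memory as observed, so the
stores cannot be proven dead and elided.

    #include <stddef.h>
    #include <string.h>

    /* Sketch only: clear sensitive data in a way the optimizer cannot elide. */
    static void memzero_explicit_sketch(void *s, size_t count)
    {
            memset(s, 0, count);
            /* Compiler barrier: pretend the cleared memory is read here. */
            __asm__ __volatile__("" : : "r" (s) : "memory");
    }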

Fixes kernel bugzilla: 82041

Reported-by: zatimend@hotmail.co.uk
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
1b2a1a7e8ad1144dc3f676f2651cb84e01548d59 17-Aug-2014 Christoph Lameter <cl@linux.com> drivers/char/random: Replace __get_cpu_var uses

A single case of using __get_cpu_var for address calculation.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
48d6be955a7167b0d0e025ae6c39e795e3544499 17-Jul-2014 Theodore Ts'o <tytso@mit.edu> random: limit the contribution of the hw rng to at most half

For people who don't trust a hardware RNG which cannot be audited,
the changes to add support for RDSEED can be troubling, since 97% or
more of the entropy will be contributed from the in-CPU hardware RNG.

We now have an in-kernel khwrngd, so for those people who do want to
implicitly trust the CPU-based hardware RNG, we could create an arch-rng
hw_random driver and allow khwrngd to refill the entropy pool. This lets
system administrators decide whether or not they trust the CPU (I
assume the NSA will trust RDRAND/RDSEED implicitly :-), and if so,
what level of entropy derating they want to use.

The reason why this is a really good idea is that if different people
use different levels of entropy derating, it becomes much more
difficult to design a backdoored hwrng whose output can be generally
exploited via /dev/random when different attack targets are using
differing levels of entropy derating.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
c6e9d6f38894798696f23c8084ca7edbf16ee895 17-Jul-2014 Theodore Ts'o <tytso@mit.edu> random: introduce getrandom(2) system call

The getrandom(2) system call was requested by the LibreSSL Portable
developers. It is analogous to the getentropy(2) system call in
OpenBSD.

The rationale of this system call is to provide resilience against
file descriptor exhaustion attacks, where the attacker consumes all
available file descriptors, forcing the use of the fallback code where
/dev/[u]random is not available. Since the fallback code is often not
well-tested, it is better to eliminate this potential failure mode
entirely.

The other feature provided by this new system call is the ability to
request randomness from the /dev/urandom entropy pool, but to block
until at least 128 bits of entropy have been accumulated in the
/dev/urandom entropy pool. Historically, the emphasis in
/dev/urandom development has been to ensure that the urandom pool is
initialized as quickly as possible after system boot, and preferably
before the init scripts start execution.

This is because changing /dev/urandom reads to block would be an
interface change that could break userspace, which is not
acceptable. In practice, on most x86 desktop and server systems, the
entropy pool can generally be initialized before it is needed (and
in modern kernels, we will printk a warning message if not). However,
on an embedded system, this may not be the case. And so with this new
interface, we can provide the functionality of blocking until the
urandom pool has been initialized. Any userspace program which uses
this new functionality must take care to ensure that, if it is used
during the boot process, it will not cause the init scripts or
other portions of system startup to hang indefinitely.

SYNOPSIS
#include <linux/random.h>

int getrandom(void *buf, size_t buflen, unsigned int flags);

DESCRIPTION
The system call getrandom() fills the buffer pointed to by buf
with up to buflen random bytes which can be used to seed user
space random number generators (i.e., DRBG's) or for other
cryptographic uses. It should not be used for Monte Carlo
simulations or other programs/algorithms which are doing
probabilistic sampling.

If the GRND_RANDOM flags bit is set, then draw from the
/dev/random pool instead of the /dev/urandom pool. The
/dev/random pool is limited based on the entropy that can be
obtained from environmental noise, so if there is insufficient
entropy, the requested number of bytes may not be returned.
If there is no entropy available at all, getrandom(2) will
either block, or return an error with errno set to EAGAIN if
the GRND_NONBLOCK bit is set in flags.

If the GRND_RANDOM bit is not set, then the /dev/urandom pool
will be used. Unlike using read(2) to fetch data from
/dev/urandom, if the urandom pool has not been sufficiently
initialized, getrandom(2) will block (or return -1 with the
errno set to EAGAIN if the GRND_NONBLOCK bit is set in flags).

The getentropy(2) system call in OpenBSD can be emulated using
the following function:

    int getentropy(void *buf, size_t buflen)
    {
            int ret;

            if (buflen > 256)
                    goto failure;
            ret = getrandom(buf, buflen, 0);
            if (ret < 0)
                    return ret;
            if (ret == buflen)
                    return 0;
    failure:
            errno = EIO;
            return -1;
    }

RETURN VALUE
On success, the number of bytes that were filled in buf is
returned. This may be fewer than the bytes requested by the
caller via buflen if insufficient entropy was present in the
/dev/random pool, or if the system call was interrupted by a
signal.

On error, -1 is returned, and errno is set appropriately.

ERRORS
EINVAL An invalid flag was passed to getrandom(2)

EFAULT buf is outside the accessible address space.

EAGAIN The requested entropy was not available, and
getrandom(2) would have blocked had the
GRND_NONBLOCK flag not been set.

EINTR While blocked waiting for entropy, the call was
interrupted by a signal handler; see the description
of how interrupted read(2) calls on "slow" devices
are handled with and without the SA_RESTART flag
in the signal(7) man page.

NOTES
For small requests (buflen <= 256) getrandom(2) will not
return EINTR when reading from the urandom pool once the
entropy pool has been initialized, and it will return all of
the bytes that have been requested. This is the recommended
way to use getrandom(2), and is designed for compatibility
with OpenBSD's getentropy() system call.

However, if you are using GRND_RANDOM, then getrandom(2) may
block until the entropy accounting determines that sufficient
environmental noise has been gathered such that getrandom(2)
will be operating as an NRBG instead of a DRBG for those people
who are working in the NIST SP 800-90 regime. Since it may
block for a long time, these guarantees do *not* apply: the
user may want to interrupt a hanging process using a signal,
so blocking until all of the requested bytes are returned
would be unfriendly.

For this reason, the user of getrandom(2) MUST always check
the return value, in case it returns an error, or fewer
bytes than requested were returned. In the case of
!GRND_RANDOM and a small request, the latter should never
happen, but careful userspace code (and all crypto code
should be careful) should check for this anyway!

Finally, unless you are doing long-term key generation (and
perhaps not even then), you probably shouldn't be using
GRND_RANDOM. The cryptographic algorithms used for
/dev/urandom are quite conservative, and so should be
sufficient for all purposes. The disadvantages of GRND_RANDOM
are that it can block, and that it adds the complexity of
dealing with partially fulfilled getrandom(2) requests.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Zach Brown <zab@zabbo.net>
79a8468747c5f95ed3d5ce8376a3e82e0c5857fc 18-Jul-2014 Hannes Frederic Sowa <hannes@stressinduktion.org> random: check for increase of entropy_count because of signed conversion

The expression entropy_count -= ibytes << (ENTROPY_SHIFT + 3) could
actually increase entropy_count: the RHS is evaluated as an unsigned
expression (mind the -=), so it is reduced modulo 2^width(int) before
being assigned back to entropy_count. Trinity found this.

[ Commit modified by tytso to add an additional safety check for a
negative entropy_count -- which should never happen, and to also add
an additional paranoia check to prevent overly large count values from
being passed into urandom_read(). ]
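
A standalone sketch of the failure mode and the clamp, with illustrative
names (the real account() code is structured differently):

    #include <limits.h>
    #include <stddef.h>

    #define ENTROPY_SHIFT 3   /* the driver counts entropy in 1/8th-bit units */

    /* Debit ibytes' worth of entropy from a signed fractional-bit counter. */
    static int debit_entropy_sketch(int entropy_count, size_t ibytes)
    {
            /*
             * The buggy form, entropy_count -= ibytes << (ENTROPY_SHIFT + 3),
             * evaluates the RHS as an unsigned size_t; for a huge ibytes the
             * value is reduced modulo 2^32 when stored back into the int, so
             * the count can end up larger than before.  Cap the request and
             * do the arithmetic as signed instead.
             */
            if (ibytes > (size_t)(INT_MAX >> (ENTROPY_SHIFT + 3)))
                    ibytes = INT_MAX >> (ENTROPY_SHIFT + 3);

            entropy_count -= (int)(ibytes << (ENTROPY_SHIFT + 3));
            if (entropy_count < 0)
                    entropy_count = 0;   /* never keep a negative count */
            return entropy_count;
    }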

Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
ee3e00e9e7101c80a2ff2d5672d4b486bf001b88 15-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: use registers from interrupted code for CPU's w/o a cycle counter

For CPUs that don't have a cycle counter, or something equivalent
which can be used for random_get_entropy(), random_get_entropy() will
always return 0. In that case, substitute the saved interrupt
registers to add a bit more unpredictability.

Some folks have suggested hashing all of the registers
unconditionally, but this would increase the overhead of
add_interrupt_randomness() by at least an order of magnitude, and this
would very likely be unacceptable.

The changes in this commit have been benchmarked as mostly unaffecting
the overhead of add_interrupt_randomness() if the cycle counter is
present, and doubling the overhead if it is not present.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: Jörn Engel <joern@logfs.org>
c84dbf61a7b322188d2a7fddc0cc6317ac6713e2 15-Jun-2014 Torsten Duwe <duwe@lst.de> random: add_hwgenerator_randomness() for feeding entropy from devices

This patch adds an interface to the random pool for feeding entropy
in-kernel.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Acked-by: H. Peter Anvin <hpa@zytor.com>
43759d4f429c8d55fd56f863542e20f4e6e8f589 15-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: use an improved fast_mix() function

Use a more efficient fast_mix() function. Thanks to George Spelvin for
doing the leg work to find a more efficient mixing function.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
840f95077ffd640df9c74ad9796fa094a5c8075a 14-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: clean up interrupt entropy accounting for archs w/o cycle counters

For architectures that don't have cycle counters, the algorithm for
deciding when to avoid giving entropy credit due to back-to-back timer
interrupts didn't make any sense, since we were checking every 64
interrupts. Change it so that we only give an entropy credit if the
majority of the interrupts are not based on the timer.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
cff850312cc7c0e0b9fe8b573687812dea232031 11-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: only update the last_pulled time if we actually transferred entropy

In xfer_secondary_pull(), check to make sure we need to pull from the
secondary pool before checking and potentially updating the
last_pulled time.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
85608f8e16c28f818f6bb9918958d231afa8bec2 11-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: remove unneeded hash of a portion of the entropy pool

We previously extracted a portion of the entropy pool in
mix_pool_bytes() and hashed it in to prevent racing CPUs from returning
duplicate random values. Now that we are using a spinlock to prevent
this from happening, this is no longer necessary. So remove it, to
simplify the code a bit.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
91fcb532efe366d79b93a3c8c368b9dca6176a55 11-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: always update the entropy pool under the spinlock

Instead of using the lockless techniques introduced in commit
902c098a3663, use spin_trylock() to try to grab the entropy pool's lock.
If we can't get the lock, then just try again on the next interrupt.

Based on discussions with George Spelvin.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
e33ba5fa7afce1a9f159704121d4e4d110df8185 16-Jun-2014 Theodore Ts'o <tytso@mit.edu> random: fix nasty entropy accounting bug

Commit 0fb7a01af5b0 "random: simplify accounting code", introduced in
v3.15, has a very nasty accounting problem when the entropy pool has
fewer bytes of entropy than the number of requested reserved
bytes. In that case, "have_bytes - reserved" goes negative, and since
size_t is unsigned, the expression:

ibytes = min_t(size_t, ibytes, have_bytes - reserved);

... does not do the right thing. This is rather bad, because it
defeats the catastrophic reseeding feature in the
xfer_secondary_pool() path.

It also can cause the "BUG: spinlock trylock failure on UP" for some
kernel configurations when prandom_reseed() calls get_random_bytes()
in the early init, since when the entropy count gets corrupted,
credit_entropy_bits() erroneously believes that the nonblocking pool
has been fully initialized (when in fact it is not), and so it calls
prandom_reseed(true) recursively leading to the spinlock BUG.

The logic is *not* the same as it was originally, but in the cases
where it matters, the behavior is the same, and the resulting code is
hopefully easier to read and understand.
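
A minimal sketch of the underflow and one way to avoid it (identifiers are
illustrative; the rewritten account() is structured differently):

    #include <stddef.h>

    /* How many bytes may we hand out while keeping "reserved" bytes back? */
    static size_t usable_bytes_sketch(size_t ibytes, size_t have_bytes,
                                      size_t reserved)
    {
            /*
             * min_t(size_t, ibytes, have_bytes - reserved) misbehaves when
             * have_bytes < reserved: the unsigned subtraction wraps to a
             * huge value and no longer limits ibytes.  Clamp at zero first.
             */
            size_t usable = have_bytes > reserved ? have_bytes - reserved : 0;

            return ibytes < usable ? ibytes : usable;
    }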

Fixes: 0fb7a01af5b0 "random: simplify accounting code"
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: Greg Price <price@mit.edu>
Cc: stable@vger.kernel.org #v3.15
5eb10d912ea0add5c15b0df1920ddd1681f0c9fb 06-Jun-2014 Joe Perches <joe@perches.com> random: convert use of typedef ctl_table to struct ctl_table

This typedef is unnecessary and should just be removed.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
f9c6d4987b23e0a514464bae6771933a48e4cd01 17-May-2014 Theodore Ts'o <tytso@mit.edu> random: fix BUG_ON caused by accounting simplification

Commit ee1de406ba6eb1 ("random: simplify accounting logic") simplified
things too much, in that it allows the following to trigger an
overflow that results in a BUG_ON crash:

dd if=/dev/urandom of=/dev/zero bs=67108707 count=1

Thanks to Peter Zijlstra for discovering the crash, and Hannes
Frederic Sowa for analyzing the root cause.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Greg Price <price@mit.edu>
bdcfa3e57c9d92b082d2378bc9a64a3a8750fa8d 25-Apr-2014 Christoph Hellwig <hch@infradead.org> random: export add_disk_randomness

This will be needed for pending changes to the scsi midlayer that now
calls lower level block APIs, as well as any blk-mq driver that wants to
contribute to the random pool.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
7b878d4b48c4e04b936918bb83836a107ba453b3 18-Mar-2014 H. Peter Anvin <hpa@linux.intel.com> random: Add arch_has_random[_seed]()

Add predicate functions for having arch_get_random[_seed]*(). The
only current use is to avoid the loop in arch_random_refill() when
arch_get_random_seed_long() is unavailable.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
331c6490c7f10dcf263712e313b7c0bc7fb6d77a 18-Mar-2014 H. Peter Anvin <hpa@linux.intel.com> random: If we have arch_get_random_seed*(), try it before blocking

If we have arch_get_random_seed*(), try to use it for emergency refill
of the entropy pool before giving up and blocking on /dev/random. It
may or may not work in the moment, but if it does work, it will give
the user better service than blocking will.

Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
83664a6928a420b5ccfc0cf23ddbfe3634fea271 18-Mar-2014 H. Peter Anvin <hpa@linux.intel.com> random: Use arch_get_random_seed*() at init time and once a second

Use arch_get_random_seed*() in two places in the Linux random
driver (drivers/char/random.c):

1. During entropy pool initialization, use RDSEED in favor of RDRAND,
with a fallback to the latter. Entropy exhaustion is unlikely to
happen there on physical hardware as the machine is single-threaded
at that point, but could happen in a virtual machine. In that
case, the fallback to RDRAND will still provide more than adequate
entropy pool initialization.

2. Once a second, issue RDSEED and, if successful, feed it to the
entropy pool. To ensure an extra layer of security, only credit
half the entropy just in case.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
46884442fc5bb81a896f7245bd850fde9b435509 18-Dec-2013 Theodore Ts'o <tytso@mit.edu> random: use the architectural HWRNG for the SHA's IV in extract_buf()

To help assuage the fears of those who think the NSA can introduce a
massive hack into the instruction decode and out of order execution
engine in the CPU without hundreds of Intel engineers knowing about
it (only one of which would need to have the conscience and courage of
Edward Snowden to spill the beans to the public), use the HWRNG to
initialize the SHA starting value, instead of xor'ing it in
afterwards.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2132a96f66b6b4d865113e7d4cb56d5f7c6e3cdf 07-Dec-2013 Greg Price <price@MIT.EDU> random: clarify bits/bytes in wakeup thresholds

These are a recurring cause of confusion, so rename them to
hopefully be clearer.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
7d1b08c40c4f02c119476b29eca9bbc8d98d2a83 07-Dec-2013 Greg Price <price@MIT.EDU> random: entropy_bytes is actually bits

The variable 'entropy_bytes' is set from an expression that actually
counts bits. Fortunately it's also only compared to values that also
count bits. Rename it accordingly.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
0fb7a01af5b0cbe5bf365891fc4d186f2caa23f7 06-Dec-2013 Greg Price <price@MIT.EDU> random: simplify accounting code

With this we handle "reserved" in just one place. As a bonus the
code becomes less nested, and the "wakeup_write" flag variable
becomes unnecessary. The variable "flags" was already unused.

This code behaves identically to the previous version except in
two pathological cases that don't occur. If the argument "nbytes"
is already less than "min", then we didn't previously enforce
"min". If r->limit is false while "reserved" is nonzero, then we
previously applied "reserved" in checking whether we had enough
bits, even though we don't apply it to actually limit how many we
take. The callers of account() never exercise either of these cases.

Before the previous commit, it was possible for "nbytes" to be less
than "min" if userspace chose a pathological configuration, but no
longer.

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
8c2aa3390ebb59cba4495a56557b70ad0575eef5 06-Dec-2013 Greg Price <price@MIT.EDU> random: tighten bound on random_read_wakeup_thresh

We use this value in a few places other than its literal meaning,
in particular in _xfer_secondary_pool() as a minimum number of
bits to pull from the input pool at a time into either output
pool. It doesn't make sense to pull more bits than the whole size
of an output pool.

We could and possibly should separate the quantities "how much
should the input pool have to have to wake up /dev/random readers"
and "how much should we transfer from the input to an output pool
at a time", but nobody is likely to be sad they can't set the first
quantity to more than 1024 bits, so for now just limit them both.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
a58aa4edc6d2e779894b1fa95a2f4de157ff3b3b 30-Nov-2013 Greg Price <price@MIT.EDU> random: forget lock in lockless accounting

The only mutable data accessed here is ->entropy_count, but since
10b3a32d2 ("random: fix accounting race condition") we use cmpxchg to
protect our accesses to ->entropy_count here. Drop the use of the
lock.

Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
ee1de406ba6eb1e01f143fe3351e70cc772cc63e 29-Nov-2013 Greg Price <price@MIT.EDU> random: simplify accounting logic

This logic is exactly equivalent to the old logic, but it should
be easier to see what it's doing.

The equivalence depends on one fact from outside this function:
when 'r->limit' is false, 'reserved' is zero. (Well, two facts;
the other is that 'reserved' is never negative.)

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
19fa5be1d92be3112521145bf99f77007abf6b16 29-Nov-2013 Greg Price <price@MIT.EDU> random: fix comment on "account"

This comment didn't quite keep up as extract_entropy() was split into
four functions. Put each bit by the function it describes.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
12ff3a517ab92b5496c731a3c354caa1f16c569f 29-Nov-2013 Greg Price <price@MIT.EDU> random: simplify loop in random_read

The loop condition never changes until just before a break, so we
might as well write it as a constant. Also since a996996dd75a
("random: drop weird m_time/a_time manipulation") we don't do anything
after the loop finishes, so the 'break's might as well return
directly. Some other simplifications.

There should be no change in behavior introduced by this commit.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
18e9cea74951b64282964f9625db94c5d5a007bd 29-Nov-2013 Greg Price <price@MIT.EDU> random: fix description of get_random_bytes

After this remark was written, commit d2e7c96af added a use of
arch_get_random_long() inside the get_random_bytes codepath.
The main point stands, but it needs to be reworded.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
f22052b2025b78475a60dbe01b66b3120f4faefa 29-Nov-2013 Greg Price <price@MIT.EDU> random: fix comment on proc_do_uuid

There's only one function here now, as uuid_strategy is long gone.
Also make the bit about "If accesses via ..." clearer.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
dfd38750db3cc17a1da3405389a81b430720c2c6 29-Nov-2013 Greg Price <price@MIT.EDU> random: fix typos / spelling errors in comments

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
4af712e8df998475736f3e2727701bd31e3751a9 11-Nov-2013 Hannes Frederic Sowa <hannes@stressinduktion.org> random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized

The Tausworthe PRNG is initialized at late_initcall time. At that time the
entropy pool serving get_random_bytes is not filled sufficiently. This
patch adds an additional reseeding step as soon as the nonblocking pool
gets marked as initialized.

On some machines it might be possible that late_initcall gets called after
the pool has been initialized. In this situation we won't reseed again.

(A call to prandom_seed_late blocks later invocations of early reseed
attempts.)

Joint work with Daniel Borkmann.

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
392a546dc8368d1745f9891ef3f8f7c380de8650 04-Nov-2013 Theodore Ts'o <tytso@mit.edu> random: add debugging code to detect early use of get_random_bytes()

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
644008df899ec252e78db28c1b6d6b86779aada8 03-Nov-2013 Theodore Ts'o <tytso@mit.edu> random: initialize the last_time field in struct timer_rand_state

Since we initialize jiffies to wrap five minutes before boot (see
INITIAL_JIFFIES defined in include/linux/jiffies.h) it's important to
make sure the last_time field is initialized to INITIAL_JIFFIES.
Otherwise, the entropy estimator will overestimate the amount of
entropy resulting from the first call to add_timer_randomness(),
generally by about 8 bits.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
ae9ecd92ddabc250817baa7eb401df3cfbd4c2da 03-Nov-2013 Theodore Ts'o <tytso@mit.edu> random: don't zap entropy count in rand_initialize()

The rand_initialize() function was being run fairly late in the kernel
boot sequence. This was unfortunate, since it zeroed the entropy
counters, thus throwing away credit that was accumulated earlier in
the boot sequence, and it also meant that initcall functions run
before rand_initialize() were using a minimally initialized pool.

To fix this, change init_std_data() so it no longer zaps the entropy
counter (it wasn't necessary), and make rand_initialize() an early
initcall.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
301f0595c0e788edacc3521c4caa90b4e56ffee1 03-Nov-2013 Theodore Ts'o <tytso@mit.edu> random: printk notifications for urandom pool initialization

Print a notification to the console when the nonblocking pool is
initialized. Also printk a warning when a process tries reading from
/dev/urandom before it is fully initialized.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
40db23e5337d99fda05ee6cd18034b516f8f123d 03-Nov-2013 Theodore Ts'o <tytso@mit.edu> random: make add_timer_randomness() fill the nonblocking pool first

Change add_timer_randomness() so that it directs incoming entropy to
the nonblocking pool first if it hasn't been fully initialized yet.
This matches the strategy we use in add_interrupt_randomness(), which
allows us to push the randomness where we need it most when
the system is first booting up, so that get_random_bytes() and
/dev/urandom become safe to use as soon as possible.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
f80bbd8b92987f55f26691cd53785c4a54622eb0 03-Oct-2013 Theodore Ts'o <tytso@mit.edu> random: convert DEBUG_ENT to tracepoints

Instead of using the random driver's ad-hoc DEBUG_ENT() mechanism, use
tracepoints instead. This allows much more fine-grained control over
the debugging a developer might need, and unifies the debugging
messages with all of the existing tracepoints.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
6265e169cd313d6f3aad3c33d0a5b0d9624f69f5 03-Oct-2013 Theodore Ts'o <tytso@mit.edu> random: push extra entropy to the output pools

As the input pool gets filled, start transferring entropy to the output
pools until they get filled. This allows us to use the output pools
to store more system entropy. Waste not, want not....

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
95b709b6be49e4ff3933ef6a5b5e623de2713a71 03-Oct-2013 Theodore Ts'o <tytso@mit.edu> random: drop trickle mode

add_timer_randomness() used to drop into trickle mode when the entropy
pool was estimated to be 87.5% full. This was important when
add_timer_randomness() was used to sample interrupts. It's not used
for this any more --- add_interrupt_randomness() now uses fast_mix()
instead. Eliminating trickle mode allows us to fully utilize
entropy provided by add_input_randomness() and add_disk_randomness()
even when the input pool is above the old trickle threshold of 87.5%.

This helps to answer the criticism in [1] in their hypothetical
scenario where our entropy estimator was inaccurate, even though the
measurements in [2] seem to indicate that our entropy estimator given
real-life entropy collection is actually pretty good, albeit on the
conservative side (which was as it was designed).

[1] http://eprint.iacr.org/2013/338.pdf
[2] http://eprint.iacr.org/2012/251.pdf

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
6e9fa2c8a630e6d0882828012431038abce285b9 22-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: adjust the generator polynomials in the mixing function slightly

Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
Videau in their paper, "The Linux Pseudorandom Number Generator
Revisited" (see: http://eprint.iacr.org/2012/251.pdf).

They suggested a small change to improve our mixing functions. I also
adjusted the comments to better explain what is going on, and to
document why the polynomials were changed.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
655b226470b229552ad95b21323864df9bd9fc74 22-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: speed up the fast_mix function by a factor of four

By mixing the entropy in chunks of 32-bit words instead of byte by
byte, we can speed up the fast_mix function significantly. Since it
is called on every single interrupt, on systems with a very heavy
interrupt load, this can make a noticeable difference.

Also fix a compilation warning in add_interrupt_randomness() and avoid
xor'ing cycles and jiffies together just in case we have an
architecture which tries to define random_get_entropy() by returning
jiffies.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Jörn Engel <joern@logfs.org>
f5c2742c23886e707f062881c5f206c1fc704782 22-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: cap the rate which the /dev/urandom pool gets reseeded

In order to avoid draining the input pool of its entropy at too high
of a rate, enforce a minimum time interval between reseedings of the
urandom pool. This is set to 60 seconds by default.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
c59974aea43fd292a0784dbf7b3d7347e2caf4e9 22-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: optimize the entropy_store structure

Use smaller types to slightly shrink the size of the entropy store
structure.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
3ef4cb2d65ee13d84140cbede8e1980c6ae49ffd 12-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: optimize spinlock use in add_device_randomness()

The add_device_randomness() function calls mix_pool_bytes() twice each
for the input pool and the non-blocking pool, for a total of four times.
By using _mix_pool_bytes() and taking the spinlock in
add_device_randomness(), we can halve the number of times we need to
take each pool's spinlock.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
5910895f0e868d4f70303922ed00ccdc328b3c30 12-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: fix the tracepoint for get_random_bytes(_arch)

Fix a problem where get_random_bytes_arch() was calling the tracepoint
get_random_bytes(). So add a new tracepoint for
get_random_bytes_arch(), and make get_random_bytes() and
get_random_bytes_arch() call their correct tracepoint.

Also, add a new tracepoint for add_device_randomness()

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
30e37ec516ae5a6957596de7661673c615c82ea4 11-Sep-2013 H. Peter Anvin <hpa@linux.intel.com> random: account for entropy loss due to overwrites

When we write entropy into a non-empty pool, we currently don't
account at all for the fact that we will probabilistically overwrite
some of the entropy in that pool. This means that unless the pool is
fully empty, we are currently *guaranteed* to overestimate the amount
of entropy in the pool!

Assuming Shannon entropy with zero correlations, we end up with an
exponentially decaying value of new entropy added:

  entropy <- entropy + (pool_size - entropy) *
                        (1 - exp(-add_entropy/pool_size))

However, calculations involving fractional exponentials are not
practical in the kernel, so apply a piecewise linearization:

For add_entropy <= pool_size/2,

  (1 - exp(-add_entropy/pool_size)) >= (add_entropy/pool_size)*0.7869...

... so we can approximate the exponential with 3/4*add_entropy/pool_size
and still be on the safe side by adding at most pool_size/2 at a time.

In order for the loop not to take arbitrary amounts of time if a bad
ioctl is received, terminate if we are within one bit of full. This
way the loop is guaranteed to terminate after no more than
log2(poolsize) iterations, no matter what the input value is. The
vast majority of the time the loop will be executed exactly once.

The piecewise linearization is very conservative, approaching 3/4 of
the usable input value for small inputs, however, our entropy
estimation is pretty weak at best, especially for small values; we
have no handle on correlation; and the Shannon entropy measure (Rényi
entropy of order 1) is not the correct one to use in the first place,
but rather the correct entropy measure is the min-entropy, the Rényi
entropy of infinite order.

As such, this conservatism seems more than justified.
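
A rough, standalone sketch of the crediting loop described above (values in
fractional bits; the kernel's arithmetic and termination test differ in
detail):

    /* Credit nfrac fractional bits into a pool of pool_size fractional bits. */
    static int credit_entropy_sketch(int entropy, int pool_size, int nfrac)
    {
            /* Stop within one bit (8 fractional bits) of full, so the loop
             * terminates even for hostile inputs. */
            while (nfrac > 0 && entropy < pool_size - 8) {
                    int chunk = nfrac > pool_size / 2 ? pool_size / 2 : nfrac;

                    /* Piecewise-linear stand-in for
                     * (pool_size - entropy) * (1 - exp(-add/pool_size)):
                     * credit 3/4 * chunk * (fraction of the pool still empty). */
                    entropy += (int)(((long long)(pool_size - entropy) * chunk * 3) /
                                     ((long long)pool_size * 4));
                    nfrac -= chunk;
            }
            return entropy < pool_size ? entropy : pool_size;
    }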

This does introduce fractional bit values. I have left it to have 3
bits of fraction, so that with a pool of 2^12 bits the multiply in
credit_entropy_bits() can still fit into an int, as 2*(3+12) < 31. It
is definitely possible to allow for more fractional accounting, but
that multiply then would have to be turned into a 32*32 -> 64 multiply.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: DJ Johnston <dj.johnston@intel.com>
a283b5c459784f9762a74c312b134e9a524f5a5f 11-Sep-2013 H. Peter Anvin <hpa@zytor.com> random: allow fractional bits to be tracked

Allow fractional bits of entropy to be tracked by scaling the entropy
counter (fixed point). This will be used in a subsequent patch that
accounts for entropy lost due to overwrites.

[ Modified by tytso to fix up a few missing places where the
entropy_count wasn't properly converted from fractional bits to
bits. ]
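
Schematically, with names that mirror the driver's but are shown here only
to illustrate the fixed-point scaling:

    #define ENTROPY_SHIFT 3                                 /* 1/8th-bit units */
    #define ENTROPY_BITS(frac)   ((frac) >> ENTROPY_SHIFT)  /* whole bits      */
    #define ENTROPY_FRAC(bits)   ((bits) << ENTROPY_SHIFT)  /* back to 1/8ths  */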

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
9ed17b70b409dc48c134a80b5a6df582ba759de2 11-Sep-2013 H. Peter Anvin <hpa@linux.intel.com> random: statically compute poolbitshift, poolbytes, poolbits

Use a macro to statically compute poolbitshift (will be used in a
subsequent patch), poolbytes, and poolbits. On virtually all
architectures the cost of a memory load with an offset is the same as
the one of a memory load.

It is still possible for this to generate worse code since the C
compiler doesn't know the fixed relationship between these fields, but
that is somewhat unlikely.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
85a1f77716cf546d9b9c42e2848b5712f51ba1ee 22-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: mix in architectural randomness earlier in extract_buf()

Previously, if the CPU had a built-in random number generator (i.e.,
RDRAND on newer x86 chips), we mixed it in at the very end of
extract_buf() using an XOR operation.

We now mix it in right after we calculate a hash across the entire
pool. This has the advantage that any contribution of entropy from
the CPU's HWRNG will get mixed back into the pool. In addition, it
means that if the HWRNG has any defects (either accidentally or
maliciously introduced), this will be mitigated via the non-linear
transform of the SHA-1 hash function before we hand out generated
output.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
61875f30daf60305712e25b209ef41ced2635bad 21-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: allow architectures to optionally define random_get_entropy()

Allow architectures which have a disabled get_cycles() function to
provide a random_get_entropy() function which provides a fine-grained,
rapidly changing counter that can be used by the /dev/random driver.

For example, an architecture might have a rapidly changing register
used to control random TLB cache eviction, or DRAM refresh that
doesn't meet the requirements of get_cycles(), but which is good
enough for the needs of the random driver.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
47d06e532e95b71c0db3839ebdef3fe8812fca2c 10-Sep-2013 Theodore Ts'o <tytso@mit.edu> random: run random_int_secret_init() run after all late_initcalls

Some platforms (e.g., ARM) initialize their clocks as
late_initcalls for some unknown reason, so make sure
random_int_secret_init() is run after all of the late_initcalls
have run.

Cc: stable@vger.kernel.org
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
0244ad004a54e39308d495fee0a2e637f8b5c317 30-Aug-2013 Martin Schwidefsky <schwidefsky@de.ibm.com> Remove GENERIC_HARDIRQ config option

After the last architecture switched to generic hard irqs the config
options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code
for !CONFIG_GENERIC_HARDIRQS can be removed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
a151427ed086952cc28f1d5f1cda84c33e48e358 14-Jun-2013 Joe Perches <joe@perches.com> char: Convert use of typedef ctl_table to struct ctl_table

This typedef is unnecessary and should just be removed.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
10b3a32d292c21ea5b3ad5ca5975e88bb20b8d68 25-May-2013 Jiri Kosina <jkosina@suse.cz> random: fix accounting race condition with lockless irq entropy_count update

Commit 902c098a3663 ("random: use lockless techniques in the interrupt
path") turned the IRQ path from being spinlock protected into a lockless
cmpxchg-retry update.

That commit removed r->lock serialization between crediting entropy bits
from IRQ context and accounting when extracting entropy on userspace
read path, but didn't turn the r->entropy_count reads/updates in
account() to use cmpxchg as well.

It has been observed, that under certain circumstances this leads to
read() on /dev/urandom to return 0 (EOF), as r->entropy_count gets
corrupted and becomes negative, which in turn results in propagating 0
all the way from account() to the actual read() call.

Convert the accounting code to be the proper lockless counterpart of
what has been partially done by 902c098a3663.
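
A minimal sketch of the cmpxchg-retry pattern the fix applies to the
read-path accounting (simplified; the real account() also deals with
reserved bytes and wakeup thresholds):

    #include <linux/atomic.h>

    /* Debit nfrac fractional bits from the pool's entropy count, locklessly. */
    static void debit_entropy_lockless(atomic_t *entropy_count, int nfrac)
    {
            int orig, new;

            do {
                    orig = atomic_read(entropy_count);
                    new = orig - nfrac;
                    if (new < 0)
                            new = 0;    /* never let the count go negative */
            } while (atomic_cmpxchg(entropy_count, orig, new) != orig);
    }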

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Greg KH <greg@kroah.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1e7e2e05c179a68aaf8830fe91547a87f4589e53 25-May-2013 Jarod Wilson <jarod@redhat.com> drivers/char/random.c: fix priming of last_data

Commit ec8f02da9ea5 ("random: prime last_data value per fips
requirements") added priming of last_data per fips requirements.

Unfortunately, it did so in a way that can lead to multiple threads all
incrementing nbytes, but only one actually doing anything with the extra
data, which leads to some fun random corruption and panics.

The fix is to simply do everything needed to prime last_data in a single
shot, so there's no window for multiple cpus to increment nbytes -- in
fact, we won't even increment or decrement nbytes anymore, we'll just
extract the needed EXTRACT_SIZE one time per pool and then carry on with
the normal routine.

All these changes have been tested across multiple hosts and
architectures where panics were previously encountered. The code changes
are strictly limited to areas only touched when booted in fips mode.

This change should also go into 3.8-stable, to make the myriads of fips
users on 3.8.x happy.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stodola <jstodola@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Matt Mackall <mpm@selenic.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
16c7fa05829e8b91db48e3539c5d6ff3c2b18a23 01-May-2013 Andy Shevchenko <andriy.shevchenko@linux.intel.com> lib/string_helpers: introduce generic string_unescape

There are several places in the kernel where modules unescape input to
convert C-style escape sequences into byte codes.

The patch provides a generic implementation of this approach. Test cases
are also included in the patch.

[akpm@linux-foundation.org: clarify comment]
[akpm@linux-foundation.org: export get_random_int() to modules]
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: William Hubbs <w.d.hubbs@gmail.com>
Cc: Chris Brannon <chris@the-brannons.com>
Cc: Kirk Reiser <kirk@braille.uwo.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1f8e8ed05184eed5f9adf48fb2f6be00a907a181 09-Apr-2012 Kent Overstreet <koverstreet@google.com> Export get_random_int()

Needed for bcache - need a cheap source of random numbers for perturbing
IO sizes, for rate limiting IO to the SSD.

Signed-off-by: Kent Overstreet <koverstreet@google.com>
CC: "Theodore Ts'o" <tytso@mit.edu>
b980955236922ae6106774511c5c05003d3ad225 04-Mar-2013 Theodore Ts'o <tytso@mit.edu> random: fix locking dependency with the tasklist_lock

Commit 6133705494bb introduced a circular lock dependency because
posix_cpu_timers_exit() is called by release_task(), which is holding
a writer lock on tasklist_lock, and this can cause a deadlock since
kill_fasync() gets called with nonblocking_pool.lock taken.

There's no reason why kill_fasync() needs to be taken while the random
pool is locked, so move it out to fix this locking dependency.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Russ Dill <Russ.Dill@gmail.com>
Cc: stable@kernel.org
eece09ec213e93333010bf4c6bb9175b32229c54 17-Jul-2011 Thomas Gleixner <tglx@linutronix.de> locking: Various static lock initializer fixes

The static lock initializers want to be fed the proper name of the
lock and not some random string. In mainline random strings are
obfuscating the readability of debug output, but for RT they prevent
the spinlock substitution. Fix it up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
ec8f02da9ea500474417d1d31fa3d46a562ab366 06-Nov-2012 Jarod Wilson <jarod@redhat.com> random: prime last_data value per fips requirements

The value stored in last_data must be primed for FIPS 140-2 purposes. Upon
first use, either on system startup or after an RNDCLEARPOOL ioctl, we
need to take an initial random sample, store it internally in last_data,
then pass along the value after that to the requester, so that consistency
checks aren't being run against stale and possibly known data.

CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: "David S. Miller" <davem@davemloft.net>
CC: Matt Mackall <mpm@selenic.com>
CC: linux-crypto@vger.kernel.org
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
8eb2ffbf7be94c546a873540ff952140465125e5 15-Oct-2012 Jiri Kosina <jkosina@suse.cz> random: fix debug format strings

Fix the following warnings in formatting debug output:

drivers/char/random.c: In function ‘xfer_secondary_pool’:
drivers/char/random.c:827: warning: format ‘%d’ expects type ‘int’, but argument 7 has type ‘size_t’
drivers/char/random.c: In function ‘account’:
drivers/char/random.c:859: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
drivers/char/random.c:881: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
drivers/char/random.c: In function ‘random_read’:
drivers/char/random.c:1141: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
drivers/char/random.c:1145: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
drivers/char/random.c:1145: warning: format ‘%d’ expects type ‘int’, but argument 6 has type ‘long unsigned int’

by using '%zd' instead of '%d' to properly denote ssize_t/size_t conversion.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
be5b779ae9ce64ede0a8f4939360b0320bb257e2 15-Oct-2012 Jiri Kosina <jkosina@suse.cz> random: make it possible to enable debugging without rebuild

The module parameter that turns on debugging mode (which basically means
printing a few extra lines at runtime) is in an '#if 0' block. Forcing
everyone who would like to see how entropy is behaving on their system to
rebuild seems a little bit too harsh.

If we were concerned about speed, we could potentially turn 'debug' into a
static key, but I don't think it's necessary.

Drop the '#if 0' block to allow using the 'debug' parameter without rebuilding.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
d2e7c96af1e54b507ae2a6a7dd2baf588417a7e5 28-Jul-2012 H. Peter Anvin <hpa@linux.intel.com> random: mix in architectural randomness in extract_buf()

Mix in any architectural randomness in extract_buf() instead of
xfer_secondary_buf(). This allows us to mix in more architectural
randomness, and it also makes xfer_secondary_buf() faster, moving a
tiny bit of additional CPU overhead to the process which is extracting the
randomness.

[ Commit description modified by tytso to remove an extended
advertisement for the RDRAND instruction. ]

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: DJ Johnston <dj.johnston@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
cbc96b7594b5691d61eba2db8b2ea723645be9ca 23-Jul-2012 Tony Luck <tony.luck@intel.com> random: Add comment to random_initialize()

Many platforms have per-machine instance data (serial numbers,
asset tags, etc.) squirreled away in areas that are accessed
during early system bringup. Mixing this data into the random
pools has a very high value in providing better random data,
so we should allow (and even encourage) architecture code to
call add_device_randomness() from the setup_arch() paths.

However, this limits our options for internal structure of
the random driver since random_initialize() is not called
until long after setup_arch().

Add a big fat comment to rand_initialize() spelling out
this requirement.

Suggested-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
c5857ccf293968348e5eb4ebedc68074de3dcda6 15-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: remove rand_initialize_irq()

With the new interrupt sampling system, we are no longer using the
timer_rand_state structure in the irq descriptor, so we can stop
initializing it now.

[ Merged in fixes from Sedat to find some last missing references to
rand_initialize_irq() ]

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
00ce1db1a634746040ace24c09a4e3a7949a3145 04-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: add tracepoints for easier debugging and verification

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
c2557a303ab6712bb6e09447df828c557c710ac9 05-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: add new get_random_bytes_arch() function

Create a new function, get_random_bytes_arch() which will use the
architecture-specific hardware random number generator if it is
present. Change get_random_bytes() to not use the HW RNG, even if it
is available.

The reason for this is that the hw random number generator is fast (if
it is present), but it requires that we trust the hardware
manufacturer to have not put in a back door. (For example, an
increasing counter encrypted by an AES key known to the NSA.)

It's unlikely that Intel (for example) was paid off by the US
Government to do this, but it's impossible for them to prove otherwise
--- especially since Bull Mountain is documented to use AES as a
whitener. Hence, the output of an evil, trojan-horse version of
RDRAND is statistically indistinguishable from an RDRAND implemented
to the specifications claimed by Intel. Short of using a tunneling
electron microscope to reverse engineer an Ivy Bridge chip and
disassembling and analyzing the CPU microcode, there's no way for us
to tell for sure.

Since users of get_random_bytes() in the Linux kernel need to be able
to support hardware systems where the HW RNG is not present, most
time-sensitive users of this interface have already created their own
cryptographic RNG interface which uses get_random_bytes() as a seed.
So it's much better to use the HW RNG to improve the existing random
number generator, by mixing in any entropy returned by the HW RNG into
/dev/random's entropy pool, but to always _use_ /dev/random's entropy
pool.

This way we get almost all of the benefits of the HW RNG without any
potential liabilities. The only benefit we forgo is the
speed/performance enhancement --- and generic kernel code can't
depend on get_random_bytes() having the speed of a HW RNG
anyway.

For those places that really want access to the arch-specific HW RNG,
if it is available, we provide get_random_bytes_arch().

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
e6d4947b12e8ad947add1032dd754803c6004824 05-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: use the arch-specific rng in xfer_secondary_pool

If the CPU supports a hardware random number generator, use it in
xfer_secondary_pool(), where it will significantly improve things and
where we can afford it.

Also, remove the use of the arch-specific rng in
add_timer_randomness(), since the call is significantly slower than
get_cycles(), and we're much better off using it in
xfer_secondary_pool() anyway.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
a2080a67abe9e314f9e9c2cc3a4a176e8a8f8793 04-Jul-2012 Linus Torvalds <torvalds@linux-foundation.org> random: create add_device_randomness() interface

Add a new interface, add_device_randomness() for adding data to the
random pool that is likely to differ between two devices (or possibly
even per boot). This would be things like MAC addresses or serial
numbers, or the read-out of the RTC. This does *not* add any actual
entropy to the pool, but it initializes the pool to different values
for devices that might otherwise be identical and have very little
entropy available to them (particularly common in the embedded world).

[ Modified by tytso to mix in a timestamp, since there may be some
variability caused by the time needed to detect/configure the hardware
in question. ]

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
902c098a3663de3fa18639efbb71b6080f0bcd3c 04-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: use lockless techniques in the interrupt path

The real-time Linux folks don't like add_interrupt_randomness() taking
a spinlock since it is called in the low-level interrupt routine.
This also allows us to reduce the overhead in the fast path, for the
random driver, which is the interrupt collection path.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
775f4b297b780601e61787b766f306ed3e1d23eb 02-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: make 'add_interrupt_randomness()' do something sane

We've been moving away from add_interrupt_randomness() for various
reasons: it's too expensive to do on every interrupt, and flooding the
CPU with interrupts could theoretically cause bogus floods of entropy
from a somewhat externally controllable source.

This solves both problems by limiting the actual randomness addition
to just once a second or after 64 interrupts, whichever comes first.
During that time, the interrupt cycle data is buffered up in a per-cpu
pool. Also, we make sure the nonblocking pool used by urandom is
initialized before we start feeding the normal input pool. This
assures that /dev/urandom is returning unpredictable data as soon as
possible.

(Based on an original patch by Linus, but significantly modified by
tytso.)
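
A schematic of that rate limit, with illustrative names (the per-interrupt
mixing is reduced to a bare XOR here; the real driver uses fast_mix(), and
the spill into the input pool is elided):

    #include <linux/jiffies.h>
    #include <linux/types.h>

    /* Illustrative per-CPU buffer; the real struct fast_pool differs. */
    struct fast_pool_sketch {
            __u32          pool[4];   /* small per-CPU scratch pool            */
            unsigned long  last;      /* jiffies when we last spilled the pool */
            unsigned short count;     /* interrupts mixed since the last spill */
    };

    static void add_interrupt_sketch(struct fast_pool_sketch *fp, __u32 cycles)
    {
            /* Cheap per-interrupt step: fold the cycle data into the pool. */
            fp->pool[fp->count & 3] ^= cycles;

            /* Only push into the input pool (and credit entropy) once a
             * second or after 64 interrupts, whichever comes first. */
            if (++fp->count < 64 && !time_after(jiffies, fp->last + HZ))
                    return;

            fp->last = jiffies;
            fp->count = 0;
            /* ... mix fp->pool into the input pool and credit entropy ... */
    }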

Tested-by: Eric Wustrow <ewust@umich.edu>
Reported-by: Eric Wustrow <ewust@umich.edu>
Reported-by: Nadia Heninger <nadiah@cs.ucsd.edu>
Reported-by: Zakir Durumeric <zakir@umich.edu>
Reported-by: J. Alex Halderman <jhalderm@umich.edu>.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
74feec5dd83d879368c1081aec5b6a1cb6dd7ce2 06-Jul-2012 Theodore Ts'o <tytso@mit.edu> random: fix up sparse warnings

Add extern and static declarations to suppress sparse warnings

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
44e4360fa3384850d65dd36fb4e6e5f2f112709b 12-Apr-2012 Mathieu Desnoyers <mathieu.desnoyers@efficios.com> drivers/char/random.c: fix boot id uniqueness race

/proc/sys/kernel/random/boot_id can be read concurrently by userspace
processes. If two (or more) user-space processes concurrently read
boot_id when sysctl_bootid is not yet assigned, a race can occur making
boot_id differ between the reads. Because the whole point of the boot id
is to be unique across a kernel execution, fix this by protecting this
operation with a spinlock.

Given that this operation is not frequently used, hitting the spinlock
on each call should not be an issue.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2dac8e54f988ab58525505d7ef982493374433c3 16-Jan-2012 H. Peter Anvin <hpa@linux.intel.com> random: Adjust the number of loops when initializing

When we are initializing using arch_get_random_long() we only need to
loop enough times to touch all the bytes in the buffer; using
poolwords for that does twice the number of operations necessary on a
64-bit machine, since in the random number generator code "word" means
32 bits.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Link: http://lkml.kernel.org/r/1324589281-31931-1-git-send-email-tytso@mit.edu
3e88bdff1c65145f7ba297ccec69c774afe4c785 22-Dec-2011 Theodore Ts'o <tytso@mit.edu> random: Use arch-specific RNG to initialize the entropy store

If there is an architecture-specific random number generator (such as
RDRAND for Intel architectures), use it to initialize /dev/random's
entropy stores. Even in the worst case, if RDRAND is something like
AES(NSA_KEY, counter++), it won't hurt, and it will definitely help
against any other adversaries.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Link: http://lkml.kernel.org/r/1324589281-31931-1-git-send-email-tytso@mit.edu
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
90ab5ee94171b3e28de6bb42ee30b527014e0be7 13-Jan-2012 Rusty Russell <rusty@rustcorp.com.au> module_param: make bool parameters really bool (drivers & misc)

module_param(bool) used to counter-intuitively take an int. In
fddd5201 (mid-2009) we allowed bool or int/unsigned int using a messy
trick.

It's time to remove the int/unsigned int option. For this version
it'll simply give a warning, but it'll break next kernel version.

Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
cf833d0b9937874b50ef2867c4e8badfd64948ce 22-Dec-2011 Linus Torvalds <torvalds@linux-foundation.org> random: Use arch_get_random_int instead of cycle counter if avail

We still don't use rdrand in /dev/random, which just seems stupid. We
accept the *cycle*counter* as a random input, but we don't accept
rdrand? That's just broken.

Sure, people can do things in user space (write to /dev/random, use
rdrand in addition to /dev/random themselves etc etc), but that
*still* seems to be a particularly stupid reason for saying "we
shouldn't bother to try to do better in /dev/random".

And even if somebody really doesn't trust rdrand as a source of random
bytes, it seems singularly stupid to trust the cycle counter *more*.

So I'd suggest the attached patch. I'm not going to even bother
arguing that we should add more bits to the entropy estimate, because
that's not the point - I don't care if /dev/random fills up slowly or
not, I think it's just stupid to not use the bits we can get from
rdrand and mix them into the strong randomness pool.

Link: http://lkml.kernel.org/r/CA%2B55aFwn59N1=m651QAyTy-1gO1noGbK18zwKDwvwqnravA84A@mail.gmail.com
Acked-by: "David S. Miller" <davem@davemloft.net>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
bd29e568a4cb6465f6e5ec7c1c1f3ae7d99cbec1 16-Nov-2011 Luck, Tony <tony.luck@intel.com> fix typo/thinko in get_random_bytes()

If there is an architecture-specific random number generator we use it
to acquire randomness one "long" at a time. We should put these random
words into consecutive words in the result buffer - not just overwrite
the first word again and again.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
0d2f096b8785b67c38afcf6e1fbb9674af2e05ca 16-Nov-2011 Luck, Tony <tony.luck@intel.com> random: Fix handling of arch_get_random_long in get_random_bytes()

If there is an architecture-specific random number generator we use
it to acquire randomness one "long" at a time. We should put these
random words into consecutive words in the result buffer - not just
overwrite the first word again and again.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/4ec4061010261a4cb0@agluck-desktop.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
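
The two entries above describe the same one-line thinko: the loop asked the architecture for a fresh long on every pass but kept writing it to the start of the buffer. A hedged userspace sketch contrasting the broken and fixed loop shapes, with arch_get_random_long() again stubbed by a counter:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool arch_get_random_long(unsigned long *v)
{
	static unsigned long n;
	*v = ++n;		/* stub; not a real hardware RNG */
	return true;
}

/* Buggy shape: every iteration overwrote the first word of the buffer. */
static void fill_broken(unsigned char *buf, size_t nbytes)
{
	unsigned long v;

	for (size_t done = 0; done < nbytes; done += sizeof(v)) {
		arch_get_random_long(&v);
		memcpy(buf, &v, sizeof(v));		/* always offset 0 */
	}
}

/* Fixed shape: advance into the buffer so the words land consecutively. */
static void fill_fixed(unsigned char *buf, size_t nbytes)
{
	unsigned long v;

	for (size_t done = 0; done < nbytes; done += sizeof(v)) {
		arch_get_random_long(&v);
		memcpy(buf + done, &v, sizeof(v));	/* consecutive words */
	}
}

int main(void)
{
	unsigned char a[32] = { 0 }, b[32] = { 0 };

	fill_broken(a, sizeof(a));
	fill_fixed(b, sizeof(b));
	printf("tail of buffer, broken: %u  fixed: %u\n", a[24], b[24]);
	return 0;
}
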
6e5714eaf77d79ae1c8b47e3e040ff5411b717ec 04-Aug-2011 David S. Miller <davem@davemloft.net> net: Compute protocol sequence numbers and fragment IDs using MD5.

Computers have become a lot faster since we compromised on the
partial MD4 hash which we use currently for performance reasons.

MD5 is a much safer choice, and is inline with both RFC1948 and
other ISS generators (OpenBSD, Solaris, etc.)

Furthermore, only having 24-bits of the sequence number be truly
unpredictable is a very serious limitation. So the periodic
regeneration and 8-bit counter have been removed. We compute and
use a full 32-bit sequence number.

For ipv6, DCCP was found to use a 32-bit truncated initial sequence
number (it needs 43-bits) and that is fixed here as well.

Reported-by: Dan Kaminsky <dan@doxpara.com>
Tested-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
63d77173266c1791f1553e9e8ccea65dc87c4485 31-Jul-2011 H. Peter Anvin <hpa@zytor.com> random: Add support for architectural random hooks

Add support for architecture-specific hooks into the kernel-directed
random number generator interfaces. This patchset does not use the
architecture random number generator interfaces for the
userspace-directed interfaces (/dev/random and /dev/urandom), thus
eliminating the need to distinguish between them based on a pool
pointer.

Changes in version 3:
- Moved the hooks from extract_entropy() to get_random_bytes().
- Changes the hooks to inlines.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Theodore Ts'o" <tytso@mit.edu>
87c48fa3b4630905f98268dde838ee43626a060c 22-Jul-2011 Eric Dumazet <eric.dumazet@gmail.com> ipv6: make fragment identifications less predictable

IPv6 fragment identification generation is way beyond what we use for
IPv4: it uses a single generator. It's not scalable and allows DoS
attacks.

Now that inetpeer is IPv6 aware, we can use it to provide a more secure and
scalable frag ident generator (per destination, instead of system wide)

This patch :
1) defines a new secure_ipv6_id() helper
2) extends inet_getid() to provide 32bit results
3) extends ipv6_select_ident() with a new dest parameter

Reported-by: Fernando Gont <fernando@gont.com.ar>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
25985edcedea6396277003854657b5f3cb31a628 31-Mar-2011 Lucas De Marchi <lucas.demarchi@profusion.mobi> Fix common misspellings

Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
442a4fffffa26fc3080350b4d50172f7589c3ac2 21-Feb-2011 Jarod Wilson <jarod@redhat.com> random: update interface comments to reflect reality

At present, the comment header in random.c makes no mention of
add_disk_randomness, and instead, suggests that disk activity adds to the
random pool by way of add_interrupt_randomness, which appears to not have
been the case since sometime prior to the existence of git, and even prior
to bitkeeper. Didn't look any further back. At least, as far as I can
tell, there are no storage drivers setting IRQF_SAMPLE_RANDOM, which is a
requirement for add_interrupt_randomness to trigger, so the only way for a
disk to contribute entropy is by way of add_disk_randomness. Update
comments accordingly, complete with special mention about solid state
drives being a crappy source of entropy (see e2e1a148bc for reference).

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
b29c617af3b09d150d3889836c24d39564b39180 06-Dec-2010 Christoph Lameter <cl@linux.com> random: Use this_cpu_inc_return

__this_cpu_inc can create a single instruction to do the same as
__get_cpu_var()++.

Cc: Richard Kennedy <richard@rsk.demon.co.uk>
Cc: Matt Mackall <mpm@selenic.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
6038f373a3dc1f1c26496e60b6c40b164716f07e 15-Aug-2010 Arnd Bergmann <arnd@arndb.de> llseek: automatically add .llseek fop

All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.

The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.

New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.

The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.

Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.

Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.

===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}

@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}

@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}

@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}

@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}

@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}

@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};

@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};

@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};

@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};

@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};

// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};

@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};

// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};

// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};

// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};

@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};

// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////

@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// write fops use offset
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};

@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};

@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};

@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
4015d9a865e3bcc42d88bedc8ce1551000bab664 31-Jul-2010 Richard Kennedy <richard@rsk.demon.co.uk> random: Reorder struct entropy_store to remove padding on 64bits

Re-order structure entropy_store to remove 8 bytes of padding on
64 bit builds, so shrinking this structure from 72 to 64 bytes
and allowing it to fit into one cache line.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
e954bc91bdd4bb08b8325478c5004b24a23a3522 20-May-2010 Matt Mackall <mpm@selenic.com> random: simplify fips mode

Rather than dynamically allocate 10 bytes, move it to static allocation.
This saves space and avoids the need for error checking.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
c41b20e721ea4f6f20f66a66e7f0c3c97a2ca9c2 11-Dec-2009 Adam Buchbinder <adam.buchbinder@gmail.com> Fix misspellings of "truly" in comments.

Some comments misspell "truly"; this fixes them. No code changes.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
cd1510cb5f892907fe1a662f90b41fb3a42954e0 01-Feb-2010 Herbert Xu <herbert@gondor.apana.org.au> random: Remove unused inode variable

The previous changeset left behind an unused inode variable.
This patch removes it.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
a996996dd75a9086b12d1cb4010f26e1748993f0 29-Jan-2010 Matt Mackall <mpm@selenic.com> random: drop weird m_time/a_time manipulation

No other driver does anything remotely like this that I know of except
for the tty drivers, and I can't see any reason for random/urandom to do
it. In fact, it's a (trivial, harmless) timing information leak. And
obviously, it generates power- and flash-cycle wasting I/O, especially
if combined with something like hwrngd. Also, it breaks ubifs's
expectations.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
35900771c06cee858b725ef7069fb6934691b319 15-Dec-2009 Joe Perches <joe@perches.com> random.c: use %pU to print UUIDs

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6d4561110a3e9fa742aeec6717248a491dfb1878 16-Nov-2009 Eric W. Biederman <ebiederm@xmission.com> sysctl: Drop & in front of every proc_handler.

For consistency drop & in front of every proc_handler. Explicitly
taking the address is unnecessary and it prevents optimizations
like stubbing the proc_handlers to NULL.

Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
894d2491153a9f8270dbed21175d06fde4eba6c7 05-Nov-2009 Eric W. Biederman <ebiederm@xmission.com> sysctl drivers: Remove dead binary sysctl support

Now that sys_sysctl is a wrapper around /proc/sys all of
the binary sysctl support elsewhere in the tree is
dead code.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Corey Minyard <minyard@acm.org>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Neil Brown <neilb@suse.de>
Cc: "James E.J. Bottomley" <James.Bottomley@suse.de>
Acked-by: Clemens Ladisch <clemens@ladisch.de> for drivers/char/hpet.c
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
8d65af789f3e2cf4cfbdbf71a0f7a61ebcd41d38 24-Sep-2009 Alexey Dobriyan <adobriyan@gmail.com> sysctl: remove "struct file *" argument of ->proc_handler

It's unused.

It isn't needed -- read or write flag is already passed and sysctl
shouldn't care about the rest.

It _was_ used in two places at arch/frv for some reason.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
5b739ef8a4e8cf5201d21abff897e292c232477b 18-Jun-2009 Neil Horman <nhorman@tuxdriver.com> random: Add optional continuous repetition test to entropy store based rngs

FIPS-140 requires that all random number generators implement continuous self
tests in which each extracted block of data is compared against the last block
for repetition. The ansi_cprng implements such a test, but it would be nice if
the hw rngs did the same thing. Obviously it's not something that's always
needed, but it seems like it would be a nice feature to have on occasion. I've
written the below patch which allows individual entropy stores to be flagged as
desiring a continuous test to be run on them as data is extracted. By default this
option is off, but is enabled in the event that fips mode is selected during
bootup.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
26a9a418237c0b06528941bca693c49c8d97edbe 19-May-2009 Linus Torvalds <torvalds@linux-foundation.org> Avoid ICE in get_random_int() with gcc-3.4.5

Martin Knoblauch reports that trying to build 2.6.30-rc6-git3 with
RHEL4.3 userspace (gcc (GCC) 3.4.5 20051201 (Red Hat 3.4.5-2)) causes an
internal compiler error (ICE):

drivers/char/random.c: In function `get_random_int':
drivers/char/random.c:1672: error: unrecognizable insn:
(insn 202 148 150 0 /scratch/build/linux-2.6.30-rc6-git3/arch/x86/include/asm/tsc.h:23 (set (reg:SI 0 ax [91])
(subreg:SI (plus:DI (plus:DI (reg:DI 0 ax [88])
(subreg:DI (reg:SI 6 bp) 0))
(const_int -4 [0xfffffffffffffffc])) 0)) -1 (nil)
(nil))
drivers/char/random.c:1672: internal compiler error: in extract_insn, at recog.c:2083

and after some debugging it turns out that it's due to the code trying
to figure out the rough value of the current stack pointer by taking an
address of an uninitialized variable and casting that to an integer.

This is clearly a compiler bug, but it's not worth fighting - while the
current stack kernel pointer might be somewhat hard to predict in user
space, it's also not generally going to change for a lot of the call
chains for a particular process.

So just drop it, and mumble some incoherent curses at the compiler.

Tested-by: Martin Knoblauch <spamtrap@knobisoft.de>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
8a0a9bd4db63bc45e3017bedeafbd88d0eb84d02 05-May-2009 Linus Torvalds <torvalds@linux-foundation.org> random: make get_random_int() more random

It's a really simple patch that basically just open-codes the current
"secure_ip_id()" call, but when open-coding it we now use a _static_
hashing area, so that it gets updated every time.

And to make sure somebody can't just start from the same original seed of
all-zeroes, and then do the "half_md4_transform()" over and over until
they get the same sequence as the kernel has, each iteration also mixes in
the same old "current->pid + jiffies" we used - so we should now have a
regular strong pseudo-number generator, but we also have one that doesn't
have a single seed.

Note: the "pid + jiffies" is just meant to be a tiny tiny bit of noise. It
has no real meaning. It could be anything. I just picked the previous
seed, it's just that now we keep the state in between calls and that will
feed into the next result, and that should make all the difference.

I made that hash be a per-cpu data just to avoid cache-line ping-pong:
having multiple CPU's write to the same data would be fine for randomness,
and add yet another layer of chaos to it, but since get_random_int() is
supposed to be a fast interface I did it that way instead. I considered
using "__raw_get_cpu_var()" to avoid any preemption overhead while still
getting the hash be _mostly_ ping-pong free, but in the end good taste won
out.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
417b43d4b728619e9bcc2da4fa246a6350d46667 03-Apr-2009 Anton Blanchard <anton@samba.org> random: align rekey_work's timer

Align rekey_work. Even though it's infrequent, we may as well line it up.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
d178a1eb5c034df1f74a2b67bf311afa5d6b8e95 11-Jan-2009 Yinghai Lu <yinghai@kernel.org> sparseirq: fix build with unknown irq_desc struct

Ingo Molnar wrote:
>
> tip/kernel/fork.c: In function 'copy_signal':
> tip/kernel/fork.c:825: warning: unused variable 'ret'
> tip/drivers/char/random.c: In function 'get_timer_rand_state':
> tip/drivers/char/random.c:584: error: dereferencing pointer to incomplete type
> tip/drivers/char/random.c: In function 'set_timer_rand_state':
> tip/drivers/char/random.c:594: error: dereferencing pointer to incomplete type
> make[3]: *** [drivers/char/random.o] Error 1

irq_desc is defined in linux/irq.h, so include it in the genirq case.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
d7e51e66899f95dabc89b4d4c6674a6e50fa37fc 08-Jan-2009 Yinghai Lu <yinghai@kernel.org> sparseirq: make some func to be used with genirq

Impact: clean up sparseirq fallout on random.c

Ingo suggested changing some ifdefs from SPARSE_IRQ to GENERIC_HARDIRQS
so we could remove some #ifdefs later if all arches support genirq

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
cda796a3d572059d64f5429dfc1d00ca6fcbaf8d 06-Jan-2009 Matt Mackall <mpm@selenic.com> random: don't try to look at entropy_count outside the lock

As a non-atomic value, it's only safe to look at entropy_count when the
pool lock is held, so we move the BUG_ON inside the lock for correctness.

Also remove the spurious comment. It's ok for entropy_count to
temporarily exceed POOLBITS so long as it's left in a consistent state
when the lock is released.

This is a more correct, simple, and idiomatic fix for the bug in
8b76f46a2db. I've left the reorderings introduced by that patch in place
as they're harmless, even though they don't properly deal with potential
atomicity issues.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2f983570010a0dcb26d988da02d7ccfad00c807c 03-Jan-2009 Yinghai Lu <yinghai@kernel.org> sparseirq: move set/get_timer_rand_state back to .c

Those two functions are only used in that C file.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
0b8f1efad30bd58f89961b82dfe68b9edf8fd2ac 06-Dec-2008 Yinghai Lu <yinghai@kernel.org> sparse irq_desc[] array: core kernel and x86 changes

Impact: new feature

Problem on distro kernels: irq_desc[NR_IRQS] takes megabytes of RAM with
NR_CPUS set to large values. The goal is to be able to scale up to much
larger NR_IRQS value without impacting the (important) common case.

To solve this, we generalize irq_desc[NR_IRQS] to an (optional) array of
irq_desc pointers.

When CONFIG_SPARSE_IRQ=y is used, we use kzalloc_node to get irq_desc,
this also makes the IRQ descriptors NUMA-local (to the site that calls
request_irq()).

This gets rid of the irq_cfg[] static array on x86 as well: irq_cfg now
uses desc->chip_data for x86 to store irq_cfg.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
233e70f4228e78eb2f80dc6650f65d3ae3dbf17c 01-Nov-2008 Al Viro <viro@ZenIV.linux.org.uk> saner FASYNC handling on file close

As it is, all instances of ->release() for files that have ->fasync()
need to remember to evict file from fasync lists; forgetting that
creates a hole and we actually have a bunch that *does* forget.

So let's keep our lives simple - let __fput() check FASYNC in
file->f_flags and call ->fasync() there if it's been set. And lose that
crap in ->release() instances - leaving it there is still valid, but we
don't have to bother anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
f221e726bf4e082a05dcd573379ac859bfba7126 16-Oct-2008 Alexey Dobriyan <adobriyan@gmail.com> sysctl: simplify ->strategy

name and nlen parameters passed to ->strategy hook are unused, remove
them. In general a ->strategy hook should know what it's doing, and shouldn't
do something tricky for which, say, a pointer to the original userspace
array (name) might be needed.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net> [ networking bits ]
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
d6c88a507ef0b6afdb013cba4e7804ba7324d99a 15-Oct-2008 Thomas Gleixner <tglx@linutronix.de> genirq: revert dynarray

Revert the dynarray changes. They need more thought and polishing.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2cc21ef843d4fb7da122239b644a1f6f0aca60a6 15-Oct-2008 Thomas Gleixner <tglx@linutronix.de> genirq: remove sparse irq code

This code is not ready, but we need to rip it out instead of rebasing
as we would lose the APIC/IO_APIC unification otherwise.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
3060d6fe28570640c2d7d66d38b9eaa848c3b9e3 20-Aug-2008 Yinghai Lu <yhlu.kernel@gmail.com> x86: put timer_rand_state pointer into irq_desc

irq_timer_state[] is an NR_IRQS-sized array that sits side by side with
the real irq_desc[] array.

Integrate that field into the (now dynamic) irq_desc dynamic array and
save some RAM.

v2: keep the old way, to support arches that don't support irq_desc

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
eef1de76da54a2ab6c6659c9a3722fd54a0e3459 20-Aug-2008 Yinghai Lu <yhlu.kernel@gmail.com> irqs: make irq_timer_state to use dyn_array

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
1f45f5621df82033cb4964d03530ade2f9a25e7b 20-Aug-2008 Yinghai Lu <yhlu.kernel@gmail.com> drivers/char: use nr_irqs

convert them to nr_irqs.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
f331c0296f2a9fee0d396a70598b954062603015 03-Sep-2008 Tejun Heo <tj@kernel.org> block: don't depend on consecutive minor space

* Implement disk_devt() and part_devt() and use them to directly
access devt instead of computing it from ->major and ->first_minor.

Note that all references to ->major and ->first_minor outside of the
block layer are used to determine the devt of the disk (the part0) and, as
->major and ->first_minor will continue to represent the devt for the
disk, converting these users isn't strictly necessary. However,
convert them for consistency.

* Implement disk_max_parts() to avoid directly dereferencing
genhd->minors.

* Update bdget_disk() such that it doesn't assume consecutive minor
space.

* Move devt computation from register_disk() to add_disk() and make it
the only one (all other usages use the initially determined value).

These changes clean up the code and will help disk->part dereference
fix and extended block device numbers.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
8b76f46a2db29407fed66cf4aca19d61b3dcb3e1 02-Sep-2008 Andrew Morton <akpm@linux-foundation.org> drivers/char/random.c: fix a race which can lead to a bogus BUG()

Fix a bug reported by and diagnosed by Aaron Straus.

This is a regression introduced into 2.6.26 by

commit adc782dae6c4c0f6fb679a48a544cfbcd79ae3dc
Author: Matt Mackall <mpm@selenic.com>
Date: Tue Apr 29 01:03:07 2008 -0700

random: simplify and rename credit_entropy_store

credit_entropy_bits() does:

	spin_lock_irqsave(&r->lock, flags);
	...
	if (r->entropy_count > r->poolinfo->POOLBITS)
		r->entropy_count = r->poolinfo->POOLBITS;

so there is a time window in which this BUG_ON():

static size_t account(struct entropy_store *r, size_t nbytes, int min,
		      int reserved)
{
	unsigned long flags;

	BUG_ON(r->entropy_count > r->poolinfo->POOLBITS);

	/* Hold lock while accounting */
	spin_lock_irqsave(&r->lock, flags);

can trigger.

We could fix this by moving the assertion inside the lock, but it seems
safer and saner to revert to the old behaviour wherein
entropy_store.entropy_count at no time exceeds
entropy_store.poolinfo->POOLBITS.

Reported-by: Aaron Straus <aaron@merfinllc.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: <stable@kernel.org> [2.6.26.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
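
This revert and the cda796a3 entry further up converge on the same rule: the counter's bound is only meaningful while the pool lock is held, so both the clamping and any assertion about it belong inside the lock. A grossly simplified userspace sketch of that rule, with a pthread mutex in place of the spinlock and toy accounting:

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

#define POOLBITS 4096

static int entropy_count;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Restore the invariant (count <= POOLBITS) before dropping the lock so
 * no other path can observe a transiently out-of-range value. */
static void credit_entropy_bits(int nbits)
{
	pthread_mutex_lock(&pool_lock);
	entropy_count += nbits;
	if (entropy_count > POOLBITS)
		entropy_count = POOLBITS;
	pthread_mutex_unlock(&pool_lock);
}

/* Check the invariant only while holding the lock; asserting on the
 * shared counter outside the lock is what made the BUG_ON fire. */
static int account(int nbytes)
{
	pthread_mutex_lock(&pool_lock);
	assert(entropy_count <= POOLBITS);
	if (entropy_count < nbytes * 8)
		nbytes = entropy_count / 8;
	entropy_count -= nbytes * 8;
	pthread_mutex_unlock(&pool_lock);
	return nbytes;
}

int main(void)
{
	credit_entropy_bits(8000);	/* clamped to POOLBITS */
	printf("granted %d bytes\n", account(100));
	return 0;
}
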
9f593653742d1dd816c4e94c6e5154a57ccba6d1 19-Aug-2008 Stephen Hemminger <shemminger@vyatta.com> nf_nat: use secure_ipv4_port_ephemeral() for NAT port randomization

Use incoming network tuple as seed for NAT port randomization.
This avoids concerns of leaking net_random() bits, and also gives better
port distribution. Don't have NAT server, compile tested only.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>

[ added missing EXPORT_SYMBOL_GPL ]

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
27ac792ca0b0a1e7e65f20342260650516c95864 24-Jul-2008 Andrea Righi <righi.andrea@gmail.com> PAGE_ALIGN(): correctly handle 64-bit values on 32-bit architectures

On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit
boundary. For example:

u64 val = PAGE_ALIGN(size);

always returns a value < 4GB even if size is greater than 4GB.

The problem resides in PAGE_MASK definition (from include/asm-x86/page.h for
example):

#define PAGE_SHIFT 12
#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1))
...
#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)

The "~" is performed on a 32-bit value, so everything in "and" with
PAGE_MASK greater than 4GB will be truncated to the 32-bit boundary.
Using the ALIGN() macro seems to be the right way, because it uses
typeof(addr) for the mask.

Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h in
include/linux/mm.h.

See also lkml discussion: http://lkml.org/lkml/2008/6/11/237

[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
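
A standalone demo of the truncation; the PAGE_* macros are lifted from the message above and ALIGN() is written out in the typeof() form it refers to. Built for 64-bit the two results match, built with -m32 the old macro drops the upper bits:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Old definition: the "~" happens on unsigned long, which is 32 bits on a
 * 32-bit build, so the AND truncates a 64-bit address. */
#define OLD_PAGE_MASK		(~(PAGE_SIZE - 1))
#define OLD_PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE - 1) & OLD_PAGE_MASK)

/* ALIGN() builds the mask in typeof(x), so a u64 keeps its upper bits. */
#define ALIGN(x, a)	(((x) + ((typeof(x))(a) - 1)) & ~((typeof(x))(a) - 1))
#define NEW_PAGE_ALIGN(addr)	ALIGN(addr, PAGE_SIZE)

int main(void)
{
	uint64_t size = 5ULL * 1024 * 1024 * 1024;	/* 5 GB, above 4 GB */

	printf("old PAGE_ALIGN: %#llx\n", (unsigned long long)OLD_PAGE_ALIGN(size));
	printf("new PAGE_ALIGN: %#llx\n", (unsigned long long)NEW_PAGE_ALIGN(size));
	return 0;
}
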
9a6f70bbed4e8b72dd340812d7c606bfd5e00b47 29-Apr-2008 Jeff Dike <jdike@addtoit.com> random: add async notification support to /dev/random

Add async notification support to /dev/random.

A little test case is below. Without this patch, you get:

$ ./async-random
Drained the pool
Found more randomness

With it, you get:

$ ./async-random
Drained the pool
SIGIO
Found more randomness

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>	/* read(), getpid() */

static void handler(int sig)
{
	printf("SIGIO\n");
}

int main(int argc, char **argv)
{
	int fd, n, err, flags;

	if(signal(SIGIO, handler) < 0){
		perror("setting SIGIO handler");
		exit(1);
	}

	fd = open("/dev/random", O_RDONLY);
	if(fd < 0){
		perror("open");
		exit(1);
	}

	flags = fcntl(fd, F_GETFL);
	if (flags < 0){
		perror("getting flags");
		exit(1);
	}

	flags |= O_NONBLOCK;
	if (fcntl(fd, F_SETFL, flags) < 0){
		perror("setting flags");
		exit(1);
	}

	while((err = read(fd, &n, sizeof(n))) > 0) ;

	if(err == 0){
		printf("random returned 0\n");
		exit(1);
	}
	else if(errno != EAGAIN){
		perror("read");
		exit(1);
	}

	flags |= O_ASYNC;
	if (fcntl(fd, F_SETFL, flags) < 0){
		perror("setting flags");
		exit(1);
	}

	if (fcntl(fd, F_SETOWN, getpid()) < 0) {
		perror("Setting SIGIO");
		exit(1);
	}

	printf("Drained the pool\n");
	read(fd, &n, sizeof(n));
	printf("Found more randomness\n");

	return(0);
}

Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
adc782dae6c4c0f6fb679a48a544cfbcd79ae3dc 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: simplify and rename credit_entropy_store

- emphasize bits in the name
- make zero bits lock-free
- simplify logic

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
e68e5b664ecb9bccf68102557107a6b6d739a97c 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: make mixing interface byte-oriented

Switch add_entropy_words to a byte-oriented interface, eliminating numerous
casts and byte/word size rounding issues. This also reduces the overall
bit/byte/word confusion in this code.

We now mix a byte at a time into the word-based pool. This takes four times
as many iterations, but should be negligible compared to hashing overhead.
This also increases our pool churn, which adds some depth against some
theoretical failure modes.

The function name is changed to emphasize pool mixing and deemphasize entropy
(the samples mixed in may not contain any). extract is added to the core
function to make it clear that it extracts from the pool.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
993ba2114c554c1561a018e5c63a771ec8e1c469 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: simplify add_ptr logic

The add_ptr variable wasn't used in a sensible way, use only i instead.
i got reused later for a different purpose, use j instead.

While we're here, put tap0 first in the tap list and add a comment.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6d38b827400d7c02bce391f90d044e4c57d5bc1e 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: remove some prefetch logic

The urandom output pool (ie the fast path) fits in one cacheline, so
this is pretty unnecessary. Further, the output path has already
fetched the entire pool to hash it before calling in here.

(This was the only user of prefetch_range in the kernel, and it passed
in words rather than bytes!)

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
feee76972bcc54b2b1d1dc28bc6c16a8daa9aff8 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: eliminate redundant new_rotate variable

- eliminate new_rotate
- move input_rotate masking
- simplify input_rotate update
- move input_rotate update to end of inner loop for readability

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
433582093a9dc5454ba03b4a7ea201d85e6aa4de 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: remove cacheline alignment for locks

Earlier changes greatly reduce the number of times we grab the lock
per output byte, so we shouldn't need this particular hack any more.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1c0ad3d492adf670e47bf0a3d65c6ba5cdee0114 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: make backtracking attacks harder

At each extraction, we change (poolbits / 16) + 32 bits in the pool,
or 96 bits in the case of the secondary pools. Thus, a brute-force
backtracking attack on the pool state is less difficult than breaking
the hash. In certain cases, this difficulty may be reduced to 2^64
iterations.

Instead, hash the entire pool in one go, then feedback the whole hash
(160 bits) in one go. This will make backtracking at least as hard as
inverting the hash.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ffd8d3fa5813430fe3926fe950fde23630f6b1a0 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: improve variable naming, clear extract buffer

- split the SHA variables apart into hash and workspace
- rename data to extract
- wipe extract and workspace after hashing

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
53c3f63e824764da23676e5c718755ff4aac9b63 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: reuse rand_initialize

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
43ae4860ff4a358c29b9d364e45c2d09ad9fa067 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: use unlocked_ioctl

No locking actually needed.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
88c730da8c8b20fa732221725347bd9460842bac 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: consolidate wakeup logic

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
90b75ee54666fe615ebcacfc8d8540b80afdedd5 29-Apr-2008 Matt Mackall <mpm@selenic.com> random: clean up checkpatch complaints

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
91f3f1e304f2e9ff2c8b9c76efd4fb8ff93110f7 06-Feb-2008 Matt Mackall <mpm@selenic.com> drivers/char/random.c:write_pool() cond_resched() needed

Reduce latency for large writes to /dev/[u]random

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Sami Farin <safari-kernel@safari.iki.fi>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
640e248e44e2c550473550ca83668ceccad21dce 30-Jan-2008 Adrian Bunk <bunk@kernel.org> unexport add_disk_randomness

This patch removes the no longer used EXPORT_SYMBOL(add_disk_randomness).

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
6dd10a62353a50b30b30e0c18653650975b29c71 14-Nov-2007 Eric Dumazet <dada1@cosmosbay.com> [NET] random : secure_tcp_sequence_number should not assume CONFIG_KTIME_SCALAR

All 32-bit machines but i386 don't have CONFIG_KTIME_SCALAR. On these
machines, ktime.tv64 is more than 4 times the (correct) result given
by ktime_to_ns()

Again on these machines, using ktime_get_real().tv64 >> 6 gives a
32-bit rollover every 64 seconds, which is not wanted (less than the
120 s MSL)

Using ktime_to_ns() is the portable way to get nsecs from a ktime, and
have correct code.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
c80544dc0b87bb65038355e7aafdc30be16b26ab 18-Oct-2007 Stephen Hemminger <shemminger@linux-foundation.org> sparse pointer use of zero as null

Get rid of sparse related warnings from places that use integer as NULL
pointer.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Cc: Andi Kleen <ak@suse.de>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Ian Kent <raven@themaw.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Davide Libenzi <davidel@xmailserver.org>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9b42c336d06411e6463949d2dac63949f66ff70b 01-Oct-2007 Eric Dumazet <dada1@cosmosbay.com> [TCP]: secure_tcp_sequence_number() should not use a too fast clock

TCP V4 sequence numbers are 32 bits, and RFC 793 assumed a 250 kHz clock.
In order to follow network speed increase, we can use a faster clock, but
we should limit this clock so that the delay between two rollovers is
greater than MSL (TCP Maximum Segment Lifetime : 2 minutes)

Choosing a 64 nsec clock should be OK, since the rollovers occur every
274 seconds.

Problem spotted by Denys Fedoryshchenko

[ This bug was introduced by f85958151900f9d30fa5ff941b0ce71eaa45a7de ]

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
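
The arithmetic behind the 64 nsec choice is easy to reproduce: a 32-bit sequence-number clock ticking every G nanoseconds rolls over after 2^32 * G ns, and that period has to stay above the 120 s MSL. A throwaway program checking the figures quoted above (1 ns would be too fast, 64 ns gives roughly 274 s, and RFC 793's 250 kHz clock means 4000 ns per tick):

#include <stdio.h>

int main(void)
{
	/* tick periods in nanoseconds: raw 1 ns, the chosen 64 ns,
	 * and RFC 793's 250 kHz clock (4000 ns per tick) */
	unsigned long ticks_ns[] = { 1, 64, 4000 };

	for (int i = 0; i < 3; i++) {
		double rollover = 4294967296.0 * ticks_ns[i] / 1e9;
		printf("%4lu ns/tick -> rollover every %8.1f s (%s 120 s MSL)\n",
		       ticks_ns[i], rollover,
		       rollover > 120.0 ? "above" : "below");
	}
	return 0;
}
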
5a021e9ffd56c22700133ebc37d607f95be8f7bd 19-Jul-2007 Matt Mackall <mpm@selenic.com> random: fix bound check ordering (CVE-2007-3105)

If root raised the default wakeup threshold over the size of the
output pool, the pool transfer function could overflow the stack with
RNG bytes, causing a DoS or potential privilege escalation.

(Bug reported by the PaX Team <pageexec@freemail.hu>)

Cc: Theodore Tso <tytso@mit.edu>
Cc: Willy Tarreau <w@1wt.eu>
Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
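
The fix is essentially an ordering rule: clamp the transfer size to the on-stack buffer before honouring the root-tunable wakeup threshold, so no sysctl value can push the copy past the end of the buffer. A simplified userspace sketch of that ordering; the names are loosely modelled on the driver, not the actual transfer code:

#include <stdio.h>
#include <string.h>

#define OUTPUT_POOL_WORDS 32

static void xfer_secondary_pool(size_t nbytes, size_t wakeup_bytes)
{
	unsigned int tmp[OUTPUT_POOL_WORDS];	/* on-stack transfer buffer */
	size_t want = nbytes;

	if (want < wakeup_bytes)	/* top up toward the threshold ... */
		want = wakeup_bytes;
	if (want > sizeof(tmp))		/* ... but the buffer bound wins */
		want = sizeof(tmp);

	memset(tmp, 0xAA, want);	/* stands in for extract + mix-in */
	printf("transferred %zu of %zu requested bytes\n", want, nbytes);
}

int main(void)
{
	/* a threshold far larger than the output pool is now harmless */
	xfer_secondary_pool(16, 1u << 20);
	return 0;
}
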
679ce0ace6b1a07043bc3b405a34ddccad808886 16-Jun-2007 Matt Mackall <mpm@selenic.com> random: fix output buffer folding

(As reported by linux@horizon.com)

Folding is done to minimize the theoretical possibility of systematic
weakness in the particular bits of the SHA1 hash output. The result of
this bug is that 16 out of 80 bits are un-folded. Without a major new
vulnerability being found in SHA1, this is harmless, but still worth
fixing.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: <linux@horizon.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
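
Folding here means XORing the two halves of the hash output together so every emitted bit depends on more than one hash bit; the bug simply left 16 of the 80 output bits uncombined. A toy illustration of the idea, using a made-up 20-byte pattern and a plain halves-XOR rather than the driver's exact word-level scheme:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t hash[20], out[10];

	/* stand-in for a 160-bit SHA-1 digest */
	for (int i = 0; i < 20; i++)
		hash[i] = (uint8_t)(i + 1);

	/* fold 160 bits down to 80: each output bit covers two hash bits */
	for (int i = 0; i < 10; i++)
		out[i] = hash[i] ^ hash[i + 10];

	for (int i = 0; i < 10; i++)
		printf("%02x", out[i]);
	printf("\n");
	return 0;
}
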
7f397dcdb78d699a20d96bfcfb595a2411a5bbd2 30-May-2007 Matt Mackall <mpm@selenic.com> random: fix seeding with zero entropy

Add data from zero-entropy random_writes directly to output pools to
avoid accounting difficulties on machines without entropy sources.

Tested on lguest with all entropy sources disabled.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
602b6aeefe8932dd8bb15014e8fe6bb25d736361 30-May-2007 Matt Mackall <mpm@selenic.com> random: fix error in entropy extraction

Fix cast error in entropy extraction.
Add comments explaining the magic 16.
Remove extra confusing loop variable.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
f85958151900f9d30fa5ff941b0ce71eaa45a7de 28-Mar-2007 Eric Dumazet <dada1@cosmosbay.com> [NET]: random functions can use nsec resolution instead of usec

In order to get more randomness for secure_tcpv6_sequence_number(),
secure_tcp_sequence_number(), secure_dccp_sequence_number() functions,
we can use the high resolution time services, providing nanosec
resolution.

I've also done two kmalloc()/kzalloc() conversions.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
cb69cc52364690d7789940c480b3a9490784b680 08-Mar-2007 Adrian Bunk <bunk@stusta.de> [TCP/DCCP/RANDOM]: Remove unused exports.

This patch removes the following not or no longer used exports:
- drivers/char/random.c: secure_tcp_sequence_number
- net/dccp/options.c: sysctl_dccp_feat_sequence_window
- net/netlink/af_netlink.c: netlink_set_err

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2b8693c0617e972fc0b2fd1ebf8de97e15b656c3 12-Feb-2007 Arjan van de Ven <arjan@linux.intel.com> [PATCH] mark struct file_operations const 3

Many struct file_operations in the kernel can be "const". Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data. In addition it'll catch accidental writes at compile time to
these shared resources.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1f29bcd739972f71f2fd5d5d265daf3e1208fa5e 10-Dec-2006 Alexey Dobriyan <adobriyan@gmail.com> [PATCH] sysctl: remove unused "context" param

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andi Kleen <ak@suse.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
a7113a966241b700aecc7b8cb326cecb62e3c4b2 08-Dec-2006 Josef Sipek <jsipek@fsl.cs.sunysb.edu> [PATCH] struct path: convert char-drivers

Signed-off-by: Josef Sipek <jsipek@fsl.cs.sunysb.edu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
b09b845ca6724c3bbdc00c0cb2313258c7189ca9 15-Nov-2006 Al Viro <viro@zeniv.linux.org.uk> [RANDOM]: Annotate random.h IP helpers.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
65f27f38446e1976cc98fd3004b110fedcddd189 22-Nov-2006 David Howells <dhowells@redhat.com> WorkStruct: Pass the work_struct pointer instead of context data

Pass the work_struct pointer to the work function rather than context data.
The work function can use container_of() to work out the data.

For the cases where the container of the work_struct may go away the moment the
pending bit is cleared, it is made possible to defer the release of the
structure by deferring the clearing of the pending bit.

To make this work, an extra flag is introduced into the management side of the
work_struct. This governs auto-release of the structure upon execution.

Ordinarily, the work queue executor would release the work_struct for further
scheduling or deallocation by clearing the pending bit prior to jumping to the
work function. This means that, unless the driver makes some guarantee itself
that the work_struct won't go away, the work function may not access anything
else in the work_struct or its container lest they be deallocated. This is a
problem if the auxiliary data is taken away (as done by the last patch).

However, if the pending bit is *not* cleared before jumping to the work
function, then the work function *may* access the work_struct and its container
with no problems. But then the work function must itself release the
work_struct by calling work_release().

In most cases, automatic release is fine, so this is the default. Special
initiators exist for the non-auto-release case (ending in _NAR).


Signed-Off-By: David Howells <dhowells@redhat.com>
52bad64d95bd89e08c49ec5a071fa6dcbe5a1a9c 22-Nov-2006 David Howells <dhowells@redhat.com> WorkStruct: Separate delayable and non-delayable events.

Separate delayable work items from non-delayable work items be splitting them
into a separate structure (delayed_work), which incorporates a work_struct and
the timer_list removed from work_struct.

The work_struct struct is huge, and this limits its usefulness. On a 64-bit
architecture it's nearly 100 bytes in size. This reduces that by half for the
non-delayable type of event.

Signed-Off-By: David Howells <dhowells@redhat.com>
80fc9f532d8c05d4cb12d55660624ce53a378349 11-Oct-2006 Dmitry Torokhov <dtor@insightbb.com> Input: add missing exports to fix modular build

Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
e9ff3990f08e9a0c2839cc22808b01732ea5b3e4 02-Oct-2006 Serge E. Hallyn <serue@us.ibm.com> [PATCH] namespaces: utsname: switch to using uts namespaces

Replace references to system_utsname with the per-process uts namespace
where appropriate. This includes things like uname.

Changes: Per Eric Biederman's comments, use the per-process uts namespace
for ELF_PLATFORM, sunrpc, and parts of net/ipv4/ipconfig.c

[jdike@addtoit.com: UML fix]
[clg@fr.ibm.com: cleanup]
[akpm@osdl.org: build fix]
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
Cc: Kirill Korotaev <dev@openvz.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Herbert Poetzl <herbert@13thfloor.at>
Cc: Andrey Savochkin <saw@sw.ru>
Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>
Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
9361401eb7619c033e2394e4f9f6d410d6719ac7 30-Sep-2006 David Howells <dhowells@redhat.com> [PATCH] BLOCK: Make it possible to disable the block layer [try #6]

Make it possible to disable the block layer. Not all embedded devices require
it, some can make do with just JFFS2, NFS, ramfs, etc - none of which require
the block layer to be present.

This patch does the following:

(*) Introduces CONFIG_BLOCK to disable the block layer, buffering and blockdev
support.

(*) Adds dependencies on CONFIG_BLOCK to any configuration item that controls
an item that uses the block layer. This includes:

(*) Block I/O tracing.

(*) Disk partition code.

(*) All filesystems that are block based, eg: Ext3, ReiserFS, ISOFS.

(*) The SCSI layer. As far as I can tell, even SCSI chardevs use the
block layer to do scheduling. Some drivers that use SCSI facilities -
such as USB storage - end up disabled indirectly from this.

(*) Various block-based device drivers, such as IDE and the old CDROM
drivers.

(*) MTD blockdev handling and FTL.

(*) JFFS - which uses set_bdev_super(), something it could avoid doing by
taking a leaf out of JFFS2's book.

(*) Makes most of the contents of linux/blkdev.h, linux/buffer_head.h and
linux/elevator.h contingent on CONFIG_BLOCK being set. sector_div() is,
however, still used in places, and so is still available.

(*) Also made contingent are the contents of linux/mpage.h, linux/genhd.h and
parts of linux/fs.h.

(*) Makes a number of files in fs/ contingent on CONFIG_BLOCK.

(*) Makes mm/bounce.c (bounce buffering) contingent on CONFIG_BLOCK.

(*) set_page_dirty() doesn't call __set_page_dirty_buffers() if CONFIG_BLOCK
is not enabled.

(*) fs/no-block.c is created to hold out-of-line stubs and things that are
required when CONFIG_BLOCK is not set:

(*) Default blockdev file operations (to give error ENODEV on opening).

(*) Makes some /proc changes:

(*) /proc/devices does not list any blockdevs.

(*) /proc/diskstats and /proc/partitions are contingent on CONFIG_BLOCK.

(*) Makes some compat ioctl handling contingent on CONFIG_BLOCK.

(*) If CONFIG_BLOCK is not defined, makes sys_quotactl() return -ENODEV if
given command other than Q_SYNC or if a special device is specified.

(*) In init/do_mounts.c, no reference is made to the blockdev routines if
CONFIG_BLOCK is not defined. This does not prohibit NFS roots or JFFS2.

(*) The bdflush, ioprio_set and ioprio_get syscalls can now be absent (return
error ENOSYS by way of cond_syscall if so).

(*) The seclvl_bd_claim() and seclvl_bd_release() security calls do nothing if
CONFIG_BLOCK is not set, since they can't then happen.

Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
e4d919188554a77c798a267e098059bc9aa39726 03-Jul-2006 Ingo Molnar <mingo@elte.hu> [PATCH] lockdep: locking init debugging improvement

Locking init improvement:

- introduce and use __SPIN_LOCK_UNLOCKED for array initializations,
to pass in the name string of locks, used by debugging

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
30aaa154fc21ad1ee4400e28009732a04a80862f 10-Apr-2006 Adrian Bunk <bunk@stusta.de> [IPV6]: Unexport secure_ipv6_port_ephemeral

This patch removes the unused EXPORT_SYMBOL(secure_ipv6_port_ephemeral).

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
d251575ab60ca2b5337574bfaf8f8b583f18059e 11-Jan-2006 Stephen Hemminger <shemminger@osdl.org> [PATCH] random: get rid of sparse warning

Get rid of bogus extern attribute that causes sparse warning.

Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
d8313f5ca2b1f86b7df6c99fc4b3fffa1f84e92b 14-Dec-2005 Arnaldo Carvalho de Melo <acme@mandriva.com> [INET6]: Generalise tcp_v6_hash_connect

Renaming it to inet6_hash_connect, making it possible to ditch
dccp_v6_hash_connect and share the same code with TCP instead.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
a7f5e7f164788a22eb5d3de8e2d3cee1bf58fdca 14-Dec-2005 Arnaldo Carvalho de Melo <acme@mandriva.com> [INET]: Generalise tcp_v4_hash_connect

Renaming it to inet_hash_connect, making it possible to ditch
dccp_v4_hash_connect and share the same code with TCP instead.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
c4365c9235f80128c3c3d5993074173941b1c1f0 10-Aug-2005 Arnaldo Carvalho de Melo <acme@ghostprotocols.net> [RANDOM]: Introduce secure_dccp_sequence_number

Code contributed by Stephen Hemminger.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
6c036527a630720063b67d9a65455e8caca2c8fa 08-Jul-2005 Christoph Lameter <christoph@lameter.com> [PATCH] mostly_read data section

Add a new section called ".data.read_mostly" for data items that are read
frequently and rarely written to like cpumaps etc.

If these maps are placed in the .data section then these frequently read
items may end up in cachelines with data that is frequently updated. In that
case all processors in an SMP system must needlessly reload the cachelines
containing those frequently used variables again and again.

The ability to share these cachelines will allow each cpu in an SMP system
to keep local copies of those shared cachelines thereby optimizing
performance.

Signed-off-by: Alok N Kataria <alokk@calsoftinc.com>
Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com>
Signed-off-by: Christoph Lameter <christoph@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
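
The same trick works in plain userspace C: a section attribute groups rarely written, hot-read globals away from frequently updated data so their cache lines stay clean on SMP. A small sketch with the attribute written out by hand (in the kernel, __read_mostly expands to such an attribute naming the section this patch adds):

#include <stdio.h>

#define __read_mostly __attribute__((__section__(".data.read_mostly")))

static int poolsize __read_mostly = 4096;	/* read on every extraction */
static long extract_count;			/* bumped constantly */

int main(void)
{
	extract_count++;
	printf("poolsize=%d extracts=%ld\n", poolsize, extract_count);
	return 0;
}
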
9e95ce279fa611226a1ab0dff1c237c080b51b60 17-Apr-2005 Matt Mackall <mpm@selenic.com> [PATCH] update maintainer for /dev/random

Ted has agreed to let me take over as maintainer of /dev/random and
friends. I've gone ahead and added a line to his entry in CREDITS.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 17-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org> Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!