Pull io_uring fixes from Jens Axboe:
- Removal of the now-unused busy wqe list (Hillf)
- Add cond_resched() to io-wq work processing (Hillf)
- And then the series I hinted at last week, which removes the sqe
  from the io_kiocb and keeps all sqe handling on the prep side. This
  guarantees that an opcode can't do the wrong thing and read the sqe
  more than once. It is unchanged from last week, and no issues have
  been observed with it in testing, hence I really think we should
  fold this into 5.5. A minimal sketch of the resulting prep/issue
  split follows below.
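
To make the idea concrete, here is a minimal user-space C sketch of
that pattern. The names (example_sqe, example_req, example_connect_prep,
example_connect_issue) are invented for illustration and are not the
real io_uring types or handlers; the point is only that the prep step
is the one place that reads the submission entry, and the issue step
works purely off the state copied into the request.

/*
 * Simplified sketch of "prep reads the sqe exactly once": everything
 * the opcode needs later is copied into the request during prep, so
 * the issue path never looks at the sqe again.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a submission queue entry shared with userspace. */
struct example_sqe {
	uint8_t  opcode;
	int32_t  fd;
	uint64_t addr;		/* user pointer, e.g. to a sockaddr */
	uint32_t addr_len;
};

/* Per-opcode state copied out of the sqe during prep. */
struct example_connect_state {
	uint64_t user_addr;
	uint32_t addr_len;
};

/* Stand-in for the per-request structure (io_kiocb in the kernel). */
struct example_req {
	int fd;
	union {
		struct example_connect_state connect;
	};
};

/*
 * Prep step: the only place allowed to read the sqe.  Once this
 * returns, nothing that happens to the sqe can change the request.
 */
static int example_connect_prep(struct example_req *req,
				const struct example_sqe *sqe)
{
	req->fd = sqe->fd;
	req->connect.user_addr = sqe->addr;
	req->connect.addr_len = sqe->addr_len;
	return 0;
}

/* Issue step: works purely off the request; it never sees the sqe. */
static int example_connect_issue(const struct example_req *req)
{
	printf("connect: fd=%d addr=0x%llx len=%u\n",
	       req->fd,
	       (unsigned long long)req->connect.user_addr,
	       req->connect.addr_len);
	return 0;
}

int main(void)
{
	struct example_sqe sqe = { .opcode = 1, .fd = 3,
				   .addr = 0x1000, .addr_len = 16 };
	struct example_req req;

	memset(&req, 0, sizeof(req));
	example_connect_prep(&req, &sqe);

	/* Even if the sqe is rewritten here, the issue path is unaffected. */
	memset(&sqe, 0, sizeof(sqe));

	return example_connect_issue(&req);
}

Even if the submission entry is rewritten after prep has run, the issue
step only ever sees the copied state, which is the guarantee the series
is after.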
* tag 'io_uring-5.5-20191226' of git://git.kernel.dk/linux-block:
io-wq: add cond_resched() to worker thread
io-wq: remove unused busy list from io_sqe
io_uring: pass in 'sqe' to the prep handlers
io_uring: standardize the prep methods
io_uring: read 'count' for IORING_OP_TIMEOUT in prep handler
  io_uring: move all prep state for IORING_OP_{SEND,RECV}MSG to prep handler
io_uring: move all prep state for IORING_OP_CONNECT to prep handler
io_uring: add and use struct io_rw for read/writes
io_uring: use u64_to_user_ptr() consistently