As the name of the class indicates, and as Sergey Levine makes clear in the first lecture, this course is concerned with deep RL. While much of the material intersects with CS234, it is generally more DL-oriented (e.g. in the examples discussed). It also covers recent algorithmic developments and open problems in deep RL.
This course is less centered on deep RL than CS285 (although the line between deep and non-deep RL can be quite blurry). The lecture notes are very extensive, and the course site also contains a midterm review. Both courses are very well taught, so the choice between them comes down to personal preference.
This 4-part MOOC is the most approachable RL course online. Topics include Markov Decision Processes, Dynamic Programming, Temporal-Difference Learning, and Policy Gradients.
This course consists of half lectures and half paper reviews. Pascal Poupart does a phenomenal job of walking through theoretical concepts in an approachable manner.
This well-known RL course by David Silver goes into the nuts and bolts of RL from a time when deep RL was still in its infancy. The topics are explained with more mathematical rigor than in the UAlberta course, but it lacks the quizzes and coding practice that the MOOC offers.