As someone with even a cursory interest in researching the Bible, you’ve probably come across John Dyer’s little website BestCommentaries.com (BC). BC attempts to list and rank commentaries on the Bible in order “to help students at all levels to make good, informed decisions about which commentaries they should purchase.” Dyer knows that “scores and ratings alone” can’t tell the whole story, but he hopes “the combined resources available through this site points [students or pastors] in the right direction.”
In reality, BC points (1) male students or pastors in the (2) American, (3) evangelical, (4) Reformed, (5) conservative direction.
A few caveats. BC’s current state is not entirely Dyer’s fault, since ratings are crowd-sourced, and it may not be what he originally envisioned. It should also be noted that there is some value in doing exactly what BC does now, for various purposes and audiences. Dyer’s website is also, as far as I can tell, the best database of somewhat current and upcoming English commentaries. The labels (technical, pastoral, etc.) are generally helpful. The site also gives a good idea of the types of commentaries conservative Americans tend to prioritize, and it is certainly helpful when looking for a comprehensive list of commentaries on a particular book, even if the rating system is flawed.
I should also briefly note how I think commentaries might be ranked or, better, analyzed. In general, I think commentaries should be ranked according to how well they perform their five primary jobs: (1) give the reader a better historical understanding of the world in which the text was produced and (2) in which the first audiences resided; (3) interact substantially with the original text; (4) give some attention to what the text might mean to modern readers (e.g., through reception history or theological reflection); (5) add something new either through new evidence or new methods. All commentaries attempt to do these jobs. For example, even the most devotional of commentaries will highlight the historical nature of a particular passage.
So what’s wrong with BC?
I’m glad you asked. I have four major problems with the site.
1. The methodology is flawed.
The methodology is straightforward. The average rating from users, journals, and featured reviewers is compiled (x). The number of reviews (y) is compiled, with more weight given to reviewers who rate more works. In other words, if someone reviews 30 books, their reviews of those books count more than the reviews of someone who rates just one book. Finally, users can create a library on the site and include whatever books they desire. The number of times a given commentary appears in users’ libraries (z) is added to the ratings. So, a commentary’s score = x + (y/10) + (z/10). Carson’s John commentary has the highest score (14.91), which is converted to 100, and other commentaries are scaled down from that.
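The formula above can be sketched in a few lines of code. This is my reconstruction from the site’s published description, not BC’s actual implementation, and the function names are my own:

```python
# Sketch of BC's scoring formula as described above (my reconstruction;
# the site's real implementation may differ in its weighting details).

def bc_raw_score(avg_rating, num_reviews, num_libraries):
    """Raw score: average rating (x) plus small bonuses for the number
    of reviews (y/10) and library appearances (z/10)."""
    return avg_rating + num_reviews / 10 + num_libraries / 10

def bc_normalized(raw, top_raw=14.91):
    """Convert a raw score to a 0-100 scale relative to the highest raw
    score on the site (14.91, Carson's John commentary per the article)."""
    return raw / top_raw * 100

# The top-scoring commentary maps to exactly 100:
print(bc_normalized(14.91))
```

Note what the bonuses imply: ten extra library appearances count as much as a full star of average rating, so sheer ownership can outweigh critical judgment.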
We are told the methodology gives extra weight to academic reviews of works that many people may not own, but it is not clear how this works or which sources count as academic. For example, is RBL considered more academic than Carson’s list? Is John Piper more academic than Jeremy Pierce (featured reviewer, PhD, Syracuse), and how is that determined?
Since this methodology relies on the popularity of certain commentaries, the measurements are really showing us which commentaries most people surveyed are using. As shown below, the surveyed subjects are largely white males who are reformed, conservative, American, and evangelical.
BestCommentaries isn’t showing readers which commentaries are best, but which commentaries are popular among white American evangelical men, with a strong bent toward Reformed traditions.
The site’s name instructs the reader they will be getting rankings of the best commentaries, not the most popular ones in certain circles.
To highlight this flawed methodology further, we need to look at who these featured reviewers are, when they reviewed these commentaries, and for whom they reviewed them.
2. Featured reviewers are homogeneous, dated, and write for specific audiences.
To show what I mean, I’ve analyzed the featured reviewers, who are given more weight in the ratings. Here we can see the algorithm at work. By homogeneous, I mean largely white, American, male, evangelical, conservative, and Reformed. By dated, I mean simply that the listed reviews are not recent, so they cannot rate newer commentaries highly.
First, nearly all featured reviews are given 5 stars. I assume Dyer’s reasoning is that since only a select number of commentaries are recommended, each recommendation should count as a 5-star rating. In reality, this gives a ton of weight to the featured reviews. Not only is a commentary given extra weight because it is reviewed at all, but it is given 5 stars, which bumps it to the top of the list. Very rarely is any other rating given.
Second, nearly all featured reviewers are reformed evangelicals who are recommending commentaries for (reformed evangelical) pastors. Though some cursory attention is given to non-American commentaries, the vast majority recommend the same commentaries but, and this is key, not because those commentaries are the best, but because those commentaries are what they (as American evangelicals etc) use.
I went through all of the featured reviewers on the BC website, gathered their biographies, and listed them below. To give you a feel for the reviewers, I’ve created this table and give more information about the reviewers below the table. If the recommendations are pulled from a book or blog, I listed the publication dates.
| Reviewer | Affiliation(s) | Intended audience |
| --- | --- | --- |
| D.A. Carson (2007) | TEDS, TGC | Theological students, ministers |
| Derek Thomas (2006) | RTS, ARC/PCA, Ligonier | Ministers |
| Matt Perman (2006) | Desiring God, Southern (MDiv) | Ministers/students |
| Fred Sanders | Biola, ETS | N/A |
| Köstenberger/Patterson (2011) | SEBTS/Liberty | Serious students of Scripture |
| Jim Rosscup (2003) | Master’s Seminary | Pastors (see note) |
| Joel Green (2015) | Fuller (UMC) | Students, pastors |
| Joel Beeke | Puritan Reformed TS | N/A |
| John Dyer | DTS | Students, pastors |
| John Glynn (2007) | ETS | Students and pastors |
| Keith Mathison (~2008) | Ligonier, Reformation BC | Pastors, students |
| Philip J. Long (2012) | Grace Bible College | N/A |
| Robert Bowman Jr. | Apologetics, ex-NAMB | N/A |
| Scot McKnight | Northern Seminary | Blog audience |
| Tim Challies (2013) | Pastor, blogger, TGC | Blog audience |
| Tremper Longman III (2007) | Westmont, Fuller, ETS | Pastors, students (see note) |
We can see a few details from this table. First, I assume these featured reviewers are weighted equally, meaning there is no difference between Carson and Challies. Second, the majority of these reviews are over ten years old and do not consider newer English commentaries. Third, the vast majority of reviewers are American, evangelical, and conservative. Many are also Reformed; every reviewer is male, and most are white.
The algorithm displays a type of circular reasoning, as seen in the graph above. The (American, evangelical, etc.) reviewers commend commentaries that are popular in their own circles or that fit their audience’s use (e.g., Reformed churches), and these commentaries, because they are recommended at all, rank highly on BC. Therefore, the “best” commentaries are those used by select reviewers for their own purposes (preaching, etc.). Because most of the recommendations are aimed at Reformed, evangelical pastors, the approved commentaries are for that audience. Rosscup, for example, specifically commends commentaries he deems to “show belief in the reliability of biblical statements,” which means he recommends conservative commentaries over others.
Best Commentaries is less a ranking of “best commentaries” and more a ranking of commentaries used by American evangelicals, weighted heavily toward Reformed evangelicals.
To show this methodology at work, let’s look at the commentaries on John. Köstenberger’s John commentary, which relies far too heavily on Carson’s, is rated second overall; given the system, this is expected, even though it did not receive great reviews outside evangelical circles. At the very bottom of the rankings we find Bultmann, who wrote the most influential commentary on John ever written. Of course, we shouldn’t expect Bultmann’s commentary to be “the best” commentary on John, since it is outdated and uses a questionable methodology, but it changed the face of Johannine studies and deserves a higher ranking than dead last!
As an example of American-centric ratings, Gary Burge’s NIV Application Commentary is ranked above Andrew Lincoln’s commentary. William Hendriksen’s John commentary is ranked very highly despite being sorely outdated. Richard Burridge’s little commentary (Daily Bible) is not listed at all.
Again, the popularity of certain commentaries within theological circles is what is being measured here.
3. BC prioritizes non-specialist reviewers for specialist works.
Other reviews come from anyone who can register and write a short review. Though only BC’s own users count in the ratings, reviews from CBD, Amazon, and Goodreads are included on each commentary’s individual page. BC’s users may or may not have any credentials or give any real reason for their ratings, and the site quickly becomes the Yelp of commentaries. Looking at Craddock’s Luke commentary (which I highly recommend), we find two reviews, one positive and one negative. Because only two people rate Craddock, it falls to the bottom of the ranked ratings. James R. Edwards’s Luke commentary is given 5 stars by a BC reviewer who says only, “A great addition to the Pillar series. A fine and well written commentary on the Gospel of Luke.” Unless you are a Lukan scholar or own the book, you’d never know that Edwards’s commentary assumes Luke uses a Hebrew source as his major source (pp. 15-16), which is highly controversial.
The way the rating metrics work is that any review by a BC user is given some weight. Again, this quickly becomes a race to pick your favorite commentaries with or without any real reasons aside from “X recommended it.”
This is especially the case with works that featured reviewers or journals do not review. Take, for example, Kenneth Gangel’s John commentary (Holman). It appears in 4 users’ libraries and is given 5 stars from RBL even though the RBL review states clearly that it is not recommended: “Let us be absolutely clear at the outset by saying that this book will be of absolutely no use to the serious scholar of John—and be of only slightly more use to the lay reader seeking encouragement in her or his faith… Although this book purports to be for an adult Christian believer seeking information about her or his basic confessional documents, the gospels, it succeeds only in patronizing its audience with pabulum of quasi-pious platitudes, banal exegesis (if one may so praise this prose) and inane illustrations. In short, this volume is a condescending and, indeed, dishonest presentation of what the author purports to be the theological message of the Gospel of John, cut loose from the context of both academic scholarship of the gospel and the vast body of theological exegesis of the gospel. We can in no terms recommend this volume to the reader. ”
But BC’s methodology gives the commentary a 5-star rating because it was reviewed at all. Because of this review, and because it appears in some users’ libraries, Gangel’s commentary ranks above, for example, Andrew Lincoln’s commentary, which was reviewed highly (but critically) by Craig Keener.
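The arithmetic behind this distortion is worth making concrete. Using the score formula from the methodology section (x + y/10 + z/10, as I reconstruct it), here is a hypothetical comparison; all the numbers below are invented for illustration, not pulled from BC:

```python
# Hypothetical illustration of how "reviewed at all = 5 stars" skews
# rankings. The formula is my reconstruction of BC's description;
# the specific numbers are invented.

def bc_raw_score(avg_rating, num_reviews, num_libraries):
    return avg_rating + num_reviews / 10 + num_libraries / 10

# A panned commentary: its lone review (actually negative, like the RBL
# review of Gangel) is logged as 5 stars, and it sits in 4 libraries.
panned = bc_raw_score(avg_rating=5.0, num_reviews=1, num_libraries=4)

# A well-regarded commentary: several honest, mixed user ratings
# averaging 4.0 across 3 reviews, appearing in 6 libraries.
acclaimed = bc_raw_score(avg_rating=4.0, num_reviews=3, num_libraries=6)

# The panned book outranks the better one purely because its single
# recommendation was recorded as a perfect score.
print(panned, acclaimed)
```

On these toy numbers the panned commentary scores 5.5 against 4.9, even though every human judgment behind the data points the other way.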
Similarly, the RBL review for Bock’s Luke commentary concludes: “No one can complain that this commentary does other than what it set out to do. And although it will be appreciated by its intended, evangelical audience, even these readers will have to look elsewhere to get the full story.” Hardly a 5-star review!
Since the weight of users’ libraries seems to outweigh critical reviews, we are left with a ranking that prioritizes popularity rather than merit, but only popularity within the circles detailed above.
4. BC prioritizes American, evangelical works.
Because the weighted reviewers are American evangelicals, they recommend largely the American evangelical commentaries. This is neither bad nor good, but we should be open and honest about the rating system.
Let’s look at Luke again. Chuck Swindoll and John MacArthur are ranked higher than Judith Lieu, F. Scott Spencer, Justo González, Graham Twelftree, and Norval Geldenhuys. In no measurable way are Swindoll’s and MacArthur’s commentaries “better” than those of these world-class scholars. Michael Wolter isn’t listed at all, though his commentary has received rave reviews (and has been available in English since last year). In reality, Swindoll and MacArthur are simply more popular within American conservative traditions. The algorithm calculates rankings based on popularity rather than merit while also privileging American evangelicals who may or may not be specialists.
Commentaries should be ranked on their merits and contributions rather than their popularity. BC ranks commentaries based on their popularity within American evangelicalism, particularly conservative Reformed circles and, as such, does not shed any light on the “best” commentaries.
BC succeeds in ranking commentaries, but only insofar as they are ranked according to American evangelicals. As long as one recognizes the site’s biases, one can use it with some benefit. Dyer’s site does succeed in aggregating reviews of varying helpfulness, and being able to sort by year is also very helpful.
In reality, biblical scholars desperately need a new website that pulls reviews only from journals like RBL and JSNT and from academic bloggers who are in a position to recommend commentaries based on the contributions they make to scholarship. Ideally, each entry should read like an annotated bibliography, containing 1-3 important points one should know about the commentary. To help fill the giant lacuna currently represented on BC, representation from minority groups, women, and mainline denominations would be especially welcome.
The question remains: should you use it? If you want to know the popular commentaries within the above circles, yes. If you want to see a reasonably up-to-date selection of works on a book/topic, then yes. Should you consult it if you want to know the “best” commentaries based on their merit and contributions? Absolutely not.