Tuesday, September 28, 2021
A Fair and Inclusive Alternative to the Sisk Academic Impact Rankings
The following guest post is by Matthew Sag (Loyola-Chicago). This post is a short version of this new essay.
The Sisk Rankings of the academic impact of law school faculties have been around for a while now. Gregory Sisk and his team release these rankings of the top 67 or so schools every three years. And so every three years I find myself wondering: “Really? Can it be true that all these schools have higher academic impact scores than Loyola Chicago, DePaul, and Houston Law?”
The short answer is: no, it’s not remotely true. There are quite a few schools that Sisk leaves out that would outrank those he includes on almost any conceivable method of aggregating citation counts. How do I know this?
When Sisk and his coauthors released their new rankings last month I spent some time digging around in the citation data available on HeinOnline. As I explain more fully in this essay, I used the data provided by HeinOnline to construct a rankings table that includes every ABA accredited law school.
My rankings are based on the median citation count of each school’s doctrinal faculty. The median is the obvious place to start if we are trying to understand the central tendency of a group with a skewed distribution. Sisk uses a slightly odd formula of twice the mean plus the median, but not much turns on this. Even if we adopt Sisk’s formula and apply it to the HeinOnline data, schools like Penn State, Loyola Chicago, DePaul, Houston Law, and Michigan State still outrank several of the faculties Sisk counts in the top 67. In the essay I have just posted to SSRN, I provide a complete ranking of schools from 1 to 193 calculated six different ways: median, mean, median+mean, mean*2+median, total, and rank_total+rank_median. I think the median makes the most sense, but readers should feel free to rationalize whichever measure ranks their school higher. The point is that my claim that the Sisk rankings are unfair does not depend on the minutiae of calculation. No matter how you crunch the numbers, several schools that Sisk and his team ignore outperform the ones he chooses to rank.
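For concreteness, here is a minimal sketch of how these six measures could be computed from per-faculty citation counts. This is not the code behind the essay; it assumes a pandas DataFrame with hypothetical columns "school" and "citations" (each row one doctrinal faculty member's five-year citation count).

```python
import pandas as pd

def rank_schools(faculty: pd.DataFrame) -> pd.DataFrame:
    # Aggregate each school's per-faculty citation counts three ways.
    agg = faculty.groupby("school")["citations"].agg(
        median="median", mean="mean", total="sum"
    )
    # The two composite score measures.
    agg["median_plus_mean"] = agg["median"] + agg["mean"]
    agg["mean2_plus_median"] = 2 * agg["mean"] + agg["median"]  # Sisk's formula
    # Rank every measure (1 = highest), then combine the total and
    # median ranks; a lower combined rank is better.
    ranks = agg.rank(ascending=False, method="min")
    agg["rank_total_plus_rank_median"] = ranks["total"] + ranks["median"]
    return agg

# Usage: rank_schools(faculty_df).sort_values("median", ascending=False)
```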
How significant are these distortions?
I have constructed a couple of figures to illustrate the differences between the Sisk rankings and my more inclusive approach. The first figure shows the difference between the Sisk rankings and a simple five-year median citation ranking for schools that are underrated by Sisk. I have assigned each school disregarded by Sisk an implied Sisk rank of 68 for this purpose. (This figure also includes schools that rank the same either way.) The second figure is the same, except that it shows which schools are overrated by Sisk.
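The comparison behind the figures is simple enough to sketch in a few lines (again with hypothetical names; the inputs are mappings from school to rank):

```python
IMPLIED_SISK_RANK = 68  # assigned to every school Sisk disregarded

def rank_gap(sisk_ranks: dict, median_ranks: dict) -> dict:
    """Positive gap: underrated by Sisk relative to the five-year
    median ranking. Negative gap: overrated by Sisk."""
    return {
        school: sisk_ranks.get(school, IMPLIED_SISK_RANK) - rank
        for school, rank in median_ranks.items()
    }
```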
Who should be left out?
The Sisk rankings exclude the majority of ABA-accredited law schools, including several that outperform many of those ranked by Sisk, as well as every law school based at a Historically Black College or University (HBCU). This exclusionary approach to ranking schools is unfair and unnecessary. It is unfair because it falsely implies that certain disfavored or overlooked schools are inferior to those deemed worth ranking. Moreover, even excluding the schools that do not outrank Sisk’s preferred schools once the playing field has been leveled is unfair: it suggests that the overlooked schools are not even in the same league as those that are ranked, rather than being separated by matters of degree.
This unfairness is unnecessary. I know the HeinOnline data is not perfect, but I suspect it is at least as good as the data Sisk and his team extract from Westlaw. The means and medians I calculated using the HeinOnline data correlate with Sisk’s results at about .95, at least for the 67 schools we both ranked.
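For readers who want to replicate that check, the correlation is a one-liner once the two sets of scores are aligned on the schools both studies cover (a sketch with hypothetical names):

```python
import pandas as pd

def score_correlation(sisk: pd.Series, hein: pd.Series) -> float:
    # Pearson correlation over the schools that appear in both rankings.
    common = sisk.index.intersection(hein.index)
    return sisk[common].corr(hein[common])
```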
When I run the Chicago Marathon in a couple of weeks, I will be running the same race as two-time Olympic medalist Galen Rupp and America’s second fastest female marathon runner ever, Sara Hall. I don’t expect to finish anywhere near these remarkable athletes, but I do expect that my time will be recorded. No doubt, there are runners who believe that they will finish faster than me, but we don’t start the race presuming that some people’s times are worth recording and others are not. We all run, we all count. There is no reason why law school rankings should be any less fair or inclusive.
Posted by Howard Wasserman on September 28, 2021 at 09:31 AM in Life of Law Schools, Teaching Law | Permalink
Comments
Contact me by email, Professor Sag, and I'm sure we can work on an arrangement to share faculty rosters confidentially. I do think the most sensible way to account for the significant departures is to look at those schools with the most significant departures. But if he would prefer to choose schools at random, I'm fine with that -- but let's choose them from his and my top 25 so we can narrow in on the discrepancy.
Posted by: Greg Sisk | Oct 11, 2021 9:10:59 AM
I would like to thank Professor Sisk for his thoughtful comments. I believe he is wrong about most of them, but I would need access to the data to know one way or the other. So I repeat my invitation to Professor Sisk, with the clarification that I will agree to any reasonable terms of confidentiality he would like to suggest.
"I would like address the contention that my faculty lists are inaccurate. For that purpose, I would like to invite Professor Sisk to share his the underlying data for the schools he ranks 7, 23, 25, 28, 30, 35, 40, 54, 66, 67 (10 numbers I chose randomly using the excel randbetween function). I would be very interested to understand the differences between our approaches, particularly whether different faculty lists have a meaningful impact on the ordinal ranking of faculties."
I do not believe that reviewing the data on "Berkeley, Michigan, and Northwestern" would be sufficient. Picking 10 schools at random makes more sense. (Ties should be resolved by alphabetical order.)
Regards,
Matt
Posted by: Matthew Sag | Oct 4, 2021 12:57:53 PM
First, I have a pretty good guess where to locate the divergence in Professor Sag’s approach, in terms of comparing his results with our Leiter-Sisk Scholarly Impact Ranking.
Some of the difference may lie in the databases, where he uses HeinOnline and we use Westlaw. And some difference might be found in his selection of the median only, whereas we provide a broader dimension with the mean as well, which gives full credit to star faculty (although, to be fair, Professor Sag ranks alternatively with the mean as well).
But the biggest source of differential is likely to be Professor Sag’s inclusion of untenured faculty, whereas we in the Sisk-Leiter approach carefully identify and include only tenured faculty.
My strongest clue to that probable source of divergence lies in the results for my own institution, the University of St. Thomas in Minnesota. Now I am very proud of my colleagues, and I firmly believe we are rightly categorized in the top 25 law schools, just as is confirmed by our #23 ranking in our Scholarly Impact Ranking. But Professor Sag elevates the University of St. Thomas to #11 by his method. As much as I love seeing our institution characterized as nearly a top ten law school, I cannot in good conscience and informed reason accept that assertion. Through my work on scholarly impact over the last decade, I’ve seen in detail the accomplishments and citations of top ten law schools. And St. Thomas is not there – yet.
But I know something about my St. Thomas faculty that likely accounts for why it bounces up to #11 for Professor Sag. At this point in our history, our doctrinal faculty is all tenured, with no junior faculty (although that’s about to change). For that reason, all of our faculty whose citations are being counted have had the luxury of time to develop a scholarly portfolio and for others to discover and cite to their work. By contrast, other law schools in the top 25 have a mix of junior and senior faculty. And when Professor Sag adds untenured faculty to the rosters of those schools, it likely has the effect of diluting medians (and means) and thus distorting the results. It is comparing apples and oranges to have a roster of only tenured faculty at our school being compared to a roster of other law schools through which Professor Sag includes a number of junior faculty who generally have not had enough experience to begin to make a mark by citation counts.
While we cannot publicly share rosters that schools have asked to be confidential, I could share a couple with Professor Sag if he promises not to divulge the individual names on that roster. Comparing names for Berkeley, Michigan, and Northwestern might confirm the inclusion of untenured faculty as contributing to their lower rating in Professor Sag’s measure.
Again, despite these significant differences, Professor Sag reports that his ranking is at about a 95 percent correlation with ours. But that 5 percent of difference may well reflect a distortion by imposing a penalty on those law schools that happen to have a larger share of junior faculty at this point in time.
Professor Sag says that he has found an easier, quicker, and better way to rank scholarly impact. But while it is indeed easier and quicker, it is demonstrably flawed, producing results that cannot be defended, such as the attractive but unreasonable claim that my school St. Thomas should rank near the top ten.
Over the past decade with our Scholarly Impact Ranking, we have done the hard work and devoted the long hours necessary to carefully assemble law school rosters that include only tenured faculty with traditional scholarly expectations and then to count citations carefully with appropriate sampling and adjustments. It isn’t quick and easy, but it is the right way to do it. And as long as I am a part of the project, I promise we’ll continue to do that hard work.
Second, although Professor Sag persists in using the term, there is nothing “exclusionary” about ranking the top third. Rather, refusing to rank further promotes values of fairness and accuracy. Even before we reach the bottom part of the top third ranking, law schools begin to cluster together with smaller and smaller differences between them, such that imposing an ordinal ranking becomes less and less fair and accurate.
Ranking all the way down to 193 is not inclusive but mistaken. Because of the shrinking differences, it is dubious to say that schools ranked near the 100 mark are meaningfully different in scholarly impact than those supposedly ranked much further down. Professor Sag says there are ways other than omitting them to account for those small differences. But the best way to do that would be to say, for example, after the top third, every other law school is tied at #70 or #75. While we could do that, it really is no different than not ranking at all — which is my point.
Moreover, it is an odd definition of inclusivity and fairness to impose a low ranking on a school without a strong objective basis for doing so and without attention to the mission of the law school. I cannot imagine that a law school ranked in Professor Sag’s bottom 50 is delighted to be included, only then to be damned as a school with one of the worst scholarly impact ratings in the country. Indeed, I would not be surprised if most of the schools in Professor Sag’s bottom 100 would have been just as pleased to be omitted. The reality is that only about one third of the nearly 200 ABA-accredited law schools can be said to have a genuine mission to make a national impact on scholarship. To impose that national scholarly impact measure on all the other schools, which have a different mission and niche, is misguided.
A rough comparison may be found in ranking college football teams. Not only is the national ranking limited to Division I schools, but it is limited to the top 25. The unranked schools are not “excluded,” but rather simply don’t make a mark as competitive for a national ranking. No purpose would be served, and it would be simply unjust, to impose a ranking all the way down on every college football team in the country, or even all of the Division I schools.
Posted by: Greg Sisk | Oct 4, 2021 11:57:44 AM
Three points in response to Professor Sisk's thoughtful comments:
(1) To be clear, while I was obviously motivated to conduct this study by the exclusion of Loyola Chicago, correcting that oversight does nothing to address the exclusion of other schools deemed unworthy of study.
(2) Relatedly, while it's true that the ordinal differences between ranks may overstate the substantive differences at the tail end of the distribution, there are ways to deal with this without excluding the majority of ABA accredited law schools.
(3) I would like to address the contention that my faculty lists are inaccurate. For that purpose, I would like to invite Professor Sisk to share the underlying data for the schools he ranks 7, 23, 25, 28, 30, 35, 40, 54, 66, and 67 (10 numbers I chose randomly using the Excel RANDBETWEEN function). I would be very interested to understand the differences between our approaches, particularly whether different faculty lists have a meaningful impact on the ordinal ranking of faculties.
Thank you for entering into this productive discussion.
-Matt
Posted by: Matthew Sag | Sep 29, 2021 6:52:52 PM
Instead of - or at least in addition to - trying to calculate the ACADEMIC impact of law school faculties, wouldn’t it be more important and meaningful to try to determine the impact of law professors on and in the real world of law, not just behind the walls of academe?
After all, many if not most law review articles are not even cited by other academics (the small group that presumably has the time to read them), and few can be shown to have had any significant impact in the real world of law.
Since each law review costs an estimated $100,000 in tuition revenue to produce, perhaps it would be much better to see which if any are truly worth this huge cost - or whether there are more valuable and productive uses of the time and money spent on churning them out.
As one law professor recently wrote, "Every year I write another article about the same thing, just like everyone else. It’s a drag, but the summer bonus makes it worth the effort."
Posted by: LawProf John Banzhaf | Sep 29, 2021 3:23:13 PM
As the lead name on the so-called “Sisk” Scholarly Impact Ranking, I read Professor Matthew Sag’s paper with great interest. I found his paper to be a powerful endorsement of our triennial Scholarly Impact Ranking. For Professor Sag to use a different database, a different set of faculty at each school, and a different calculation method for scholarly impact —and yet to find a 95% correlation with the ranking results that we independently achieved (as Professor Sag notes in his blog post) — is rather remarkable. This should be grounds for celebrating the strong alignment between us and the confirmation yet again of the robust strength of citation-based rankings. And on top of that, Professor Sag ranks my own school, the University of St. Thomas in Minnesota, way up at #11, way above the #23 that our ranking produced.
Unfortunately, rather than this positive and unifying message, the theme of Professor Sag’s paper is that our Leiter-Sisk Scholarly Impact Ranking is exclusionary and unfair. Fortunately, the factual assertions that draw him to that conclusion are mostly inaccurate. He suggests, for example, that we have excluded such schools as Houston, DePaul, and Seton Hall, when we simply have not. Indeed, he says in a comment to his blog post that DePaul has never been included in our study. To the contrary, DePaul has always been included in our study and, in 2015, achieved the top-third ranking. Yes, it is true that Professor Sag’s own institution, Loyola-Chicago, was not included in this year’s study. That’s a fair grievance. In fact, we have included Loyola-Chicago in the past, where it did not approach the top-third ranking. But faculties change, and, based on Professor Sag’s findings, I agree that we should include Loyola-Chicago again. And I promise we will next time around. Yes, we are that open to inclusion.
Our approach to including law schools for the intensive phase of study has been open and transparent. We share the list of about 100 law schools publicly before we conduct the study through the associate deans’ listserv to which every accredited law school belongs. We invite law schools that are not on the list to conduct their own citation study and share it with us. And schools do every time. While most of those schools do not end up making it into the top third ranking, that does happen on occasion. And we welcome it. And lest there be any doubt, we do a full work-up of all of these schools, meaning that this year we fully vetted the faculty rosters and did a full citation count, including sampling, etc., of all 99 schools studied.
To be sure, there are variations between our rankings, even though the correlation is tight overall. The reasons for those variations are likely to be found in (1) different databases (we use Westlaw and Professor Sag used HeinOnline), and (2) a different point of study (we carefully verify rosters of tenured faculty with traditional scholarly expectations, while Professor Sag apparently simply accepted a HeinOnline designation of “doctrinal” teaching).
The differences — that is, the pluses and minuses — of Westlaw versus HeinOnline have been long debated and are spelled out in multiple publications, including the report of our most recent ranking. I am comfortable with our considered choice of Westlaw as the database. I do think the comparative advantages of Westlaw far outweigh the disadvantages, as we’ve openly explained. But I appreciate room for a difference of opinion and note again that the more refined HeinOnline ranking conducted by Paul Heald and Ted Sichelman found that our different approaches were very closely correlated.
My greater concern is with Professor Sag’s choice of the faculty to study. Preparing, vetting, and verifying the faculty rosters is one of the most time-consuming parts of our ranking study every three years. I preside over our work on identifying which faculty members at each law school have tenure, which have traditional scholarly expectations, and which are moving to other institutions. I then transparently share those preliminary rosters with the deans at each school, asking to be informed of possible errors and learn of recent changes. We insist on making the final choice, being consistent among all law schools.
But Professor Sag bypasses that entire painstaking stage. He apparently includes all faculty who designate teaching in a doctrinal course, which I think then means he includes not only tenured faculty, but untenured faculty and even those who are not on tenure-track at all. In addition, several schools have confirmed to us that their tenured faculty teaching in clinics have the same scholarly expectations, and so for those schools we include tenured clinical faculty. Not Professor Sag.
And Professor Sag doesn’t account for recently-announced lateral moves, which often is critical. Those lateral moves are a key part of the dynamic nature of Scholarly Impact Ranking.
Getting the faculty rosters right is hard work for us, but it makes all the difference.
Moreover, Professor Sag’s paper confirms our wisdom in not trying to rank all the way down for every ABA-accredited law school. While he imposes an ordinal ranking on schools from 1 to 193, I know from looking at the mean and median data that the differences among the schools after about the one-third point (after ranking about 69 or 70) are too small to justify separating them through a misleading ordinal ranking. It just is not fair to rank further as the differences between the schools’ scholarly impact shrink to the minuscule.
But let me end by again accentuating the positive. Despite all of these differences, we find again that citation-based rankings tend to bolster one another. For that, I am thankful.
Posted by: Greg Sisk | Sep 28, 2021 8:43:51 PM
Prof. Sag just pointed out this post to me. There is an error right at the start: Houston and DePaul were both studied by Sisk & colleagues, and did not make the top 50.
Posted by: Brian Leiter | Sep 28, 2021 5:40:04 PM
Scott--HeinOnline's system works more like Google Scholar in how it counts citations.
Overall, it seems to me that HeinOnline and Westlaw both have advantages and disadvantages.
On one hand, HeinOnline will better pick up co-authored articles. Sisk's methodology of searching for names in Westlaw may miss some cases where an author isn't cited by name but instead replaced with an "et al" in the law review formatted citation. I believe Sisk may try to account for this, but I have to imagine it is hard to fully fix using his methodology. By contrast, HeinOnline links each author to their respective articles (regardless of how many co-authors there are) and then counts the number of times anyone cites that specific article (e.g. 99 Pa. L. Rev. 9999). This removes the whole "et al" problem faced by Sisk's methodology.
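To make the "et al." point concrete, here is a toy contrast of the two counting styles. This is purely illustrative (neither service's actual code), with invented data structures:

```python
from collections import Counter

# Article-level counting (HeinOnline-style): every coauthor of a cited
# article gets credit, so a "Smith et al." citation is not lost.
def article_level_counts(cited_ids, authors_of):
    counts = Counter()
    for article_id in cited_ids:          # one entry per citing footnote
        for author in authors_of[article_id]:
            counts[author] += 1
    return counts

# Name-search counting (a rough Westlaw-style sketch): count documents
# whose text mentions the author's name; an "et al." cite is missed.
def name_search_counts(documents, names):
    return Counter(
        name for doc in documents for name in names if name in doc
    )
```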
On the other hand, Westlaw may be slightly better at identifying a small number of citations to books or disciplinary journals outside of law reviews. Although both HeinOnline and Westlaw lack coverage of many interdisciplinary journals and books, Sisk's methodology of searching Westlaw for references to anyone's name may catch times when law review articles or treatises cite books or disciplinary articles (even if those books or journals are not in their database).
To me, I don't see any clear and obvious advantage to using one database over the other. However, I think Matt's point about Sisk unnecessarily excluding some schools with high levels of scholarly production is a valid one--and the ease of HeinOnline citation data means we don't need to exclude anyone for expediency purposes in the future.
Posted by: Prof | Sep 28, 2021 5:13:14 PM
On Westlaw vs. Google Scholar, Westlaw counts citing papers, and Google Scholar counts cited papers. E.g., if I publish a paper citing to three different works by Matt, Matt will get 3 Google Scholar citations but only 1 Westlaw citation. Not sure how Hein does it.
Posted by: Scott Dodson | Sep 28, 2021 4:20:25 PM
If the GS noise is uniform it should matter less for the purposes of rankings, and anyway the fact that GS simply treats all scholarly contributions alike might outweigh the costs of noise---if only as a comparator to HOL. The baked-in parochialism strikes me as a major con in a world where more faculty are doing excellent work outside of the broken world of law reviews. Anyway, HOL is clearly far superior to Westlaw in this regard and it is flabbergasting that Sisk has stuck with Westlaw despite its irrelevance for academic research.
An interesting follow-up would be to decompose the difference in rankings due to coverage differences vs. metric differences.
Posted by: anon | Sep 28, 2021 3:06:14 PM
To follow up, Sisk's criteria for getting onto the long list are totally opaque. Loyola has never been included, as far as I know. Same for DePaul, I believe.
Posted by: Matthew Sag | Sep 28, 2021 2:26:59 PM
Thanks for your comments.
Academics who compare their citation counts on Google Scholar to what can be extracted from Westlaw or HeinOnline will notice that the Google Scholar numbers are much higher. Partly this is because Google finds citations outside the coverage of Westlaw and HeinOnline, and partly this is because there is a lot of double counting in Google Scholar as a paper may be cited in multiple versions of the same working paper. GS is not consistent enough to use for this purpose yet. Hopefully one day it will be.
Posted by: Matthew Sag | Sep 28, 2021 2:24:51 PM
So, Sisk's note 51 says this: "The clustering together of schools with scores only slightly apart increased beyond where we ended the ranking at #63 (with a total of 68 law faculties). For example, the law faculties at eight schools fell just short of the ranking: Denver, Hawaii, Houston, Penn State, Rutgers, Tennessee, Texas A&M, and Washington."
That's 8 schools that Sisk supposedly under-included (but just barely by their account), but Matt has them at:
Denver: 67
Hawaii: 111
Houston: 52
Penn State: 50
Rutgers: 92
Tennessee: 82
Texas A&M: 67
Washington: 119 (but 75 in mean!)
So, my takeaway is that the process is slightly sensitive to whether you use Westlaw or Hein, and highly sensitive to whether you use median or some other measure. After all, using the Sisk Mean*2+median, Houston lands at 69, right where Sisk says they land. They are only underranked because of the method used to calculate. I will note, though, that Matt's method is a whole lot more transparent about these types of shifts, which seem to affect some schools more than others.
But then consider Loyola Chicago, which is absent in Sisk, but ranks 50 (!) using Sisk's mean*2+median method. They really were left out, and that's surprising to me.
Posted by: Michael Risch | Sep 28, 2021 1:16:25 PM
Hi Prof -
On reread of the blog post, I take back what I said. The post does imply that these schools are simply not counted. I will have to read the essay, because I'm pretty surprised that some of these schools are not included.
Posted by: Michael Risch | Sep 28, 2021 12:56:52 PM
Hi prof-
I don't think that's it. First, I find it very hard to believe that they didn't at least check to see if Houston and Loyola-Chicago were in the top 60. They check my school every year even though we are usually just short of the cut.
Furthermore, Matt's charts show over and under counting even among those schools in their ranking set, which implies that their ranks are somehow off. A presentation with concerns along the lines you are discussing would simply compare those omitted schools to the included ones to show how the omitted ones really have just as much. But that's not what I'm seeing in the charts here, which is why I'm wondering where the discrepancy is.
Posted by: Michael Risch | Sep 28, 2021 12:50:19 PM
(And btw both completely exclude _books_, which are a preferred mode of scholarship in some important subfields, like legal history and law & philosophy. Why reinvent the wheel?)
Posted by: anon | Sep 28, 2021 12:50:06 PM
Is there any justification for using HOL or Westlaw rather than Google Scholar? Certainly Westlaw, and maybe HOL, exclude increasingly important interdisciplinary journals widely read (and respected) by legal academics, or by people using legal knowledge. To name just one example--health law faculty frequently publish in the likes of NEJM and JAMA, making important contributions to their field, and these contributions are completely unaccounted for.
Posted by: anon | Sep 28, 2021 12:47:49 PM
Michael—not to speak for Matt, but I think the concern is that, because Sisk’s methodology is time intensive, he and his team have to make an initial decision about which schools to examine (and which schools to not examine). That’s not an indictment of the hard work he and his team do! But their methodology requires a lot of time to produce, meaning that they have to make some choices, like which schools to consider at all. And this introduces an arbitrary cut off that excludes from consideration many schools that, if evaluated, would have ranked even higher than those he included in his initial analysis. This results in a perception that some schools have lower citation counts than their peers when that might not be true. Those schools may have merely not been included in Sisk’s initial pool of schools he examined at the outset. Matt may correct me if I’m mistaken. But that seems like a reasonable critique to me. And I think this illustrates the usefulness of HeinOnline. Because they’ve done the work for us, there is no need to make any arbitrary cutoffs in the future.
Now, I can imagine legitimate critiques of whether HeinOnline or Westlaw produce more accurate citation counts given their database, how they account for co-authored articles, and more. But to me, Matt’s point about including all schools in a ranking system is pretty compelling.
Posted by: prof | Sep 28, 2021 11:19:27 AM
Matt - Thanks for doing this work. I guess I'm not understanding the source of the bias. If there's a .95 correlation, is the issue that the remaining .05 leads to these differences? Or is it the five-year basis of the count? That is, what accounts for the over/under-ranking in your view?
On a related note, I thought you were going to go a different way, and list citations to articles written in the last x years (where 3 < x < 10), so that impact is measured more by who is writing now, not by who wrote 20-30 years ago.
Posted by: Michael Risch | Sep 28, 2021 11:02:29 AM