Next week, the Supreme Court will take up two cases challenging decades of precedent that have upheld the use of race and ethnicity as one factor among many that can be considered in the admissions process at colleges and universities.
Stacy Hawkins, vice dean and professor of law at Rutgers Law School and a senior faculty fellow at the Institute for the Study of Global Racial Justice, explains the history of affirmative action and what is at stake in these cases.
When did affirmative action start and how effective has it been?
Affirmative action traditionally refers to programs initiated by President John F. Kennedy in 1961 that peaked in the 1970s and mandated "affirmative action" by employers and other private institutions receiving federal funds, such as colleges and universities, in favor of women and racial and ethnic minorities as a means of redressing past discrimination against these groups.
By contrast, many people also use "affirmative action" to describe current efforts, whether in the workplace or in educational settings, that consider race, ethnicity, or gender to achieve diversity. These programs are often justified by the instrumental benefits that accrue from diversity, such as enriching the learning or work environment with people from a variety of backgrounds, rather than by the remedial concerns that motivated affirmative action in the 1960s and 1970s.
Traditional affirmative action efforts have not been studied as widely as you might imagine, but the available research tends to show that, from their inception in the 1960s through their peak in the 1970s, these efforts benefited Black men and women and white women by opening up jobs in the skilled trades, as well as in professional and managerial workforces, that had previously been closed to these groups.
Continue reading in Rutgers Today