| Bright Spot Summary, 2024–25 | |
|---|---|
| Bright spot schools meeting analytic criteria | 310 |
| Counties represented | 62 |
| Districts represented | 75 |
| Median improvement since 2021–22 peak | 19.2 pp |
| Median share of economically disadvantaged students | 59% |
| Schools with no QC flags (confirmed candidates) | 184 |
2025 AttendNC Bright Spots
Schools Outperforming Expectations on Chronic Absenteeism
About This Report
This report identifies North Carolina public schools that have made exceptional progress reducing chronic absenteeism. These “Bright Spot” schools are doing better than expected given their structural and socioeconomic context.
The goal is not to rank schools or assign credit or blame. The goal is to identify places where something appears to be working and make those schools visible to educators, policymakers, and researchers who want to understand what may be possible in similar contexts.
Data in this report are drawn from publicly available NC School Report Card data. We use a model-adjusted approach so that schools are compared more fairly, accounting for differences in poverty, enrollment, grade configuration, staffing, and prior absenteeism levels.
A full description of data sources and methodology is provided in the Technical Brief.
What Is Chronic Absenteeism?
A student is chronically absent when they miss 10% or more of the school year, roughly 18 or more days. Chronic absenteeism is associated with lower academic achievement, reduced likelihood of on-time graduation, and long-term economic consequences.
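The 10% definition can be expressed as a simple check. This is an illustrative sketch, not code from the report's analysis pipeline; the function name is ours.

```python
# Illustrative sketch of the 10% chronic absenteeism rule (not the report's code).
def is_chronically_absent(days_absent: int, days_enrolled: int) -> bool:
    if days_enrolled <= 0:
        raise ValueError("days_enrolled must be positive")
    return days_absent / days_enrolled >= 0.10

# In a typical ~180-day school year, 18 absences crosses the threshold.
print(is_chronically_absent(18, 180))  # True
print(is_chronically_absent(17, 180))  # False
```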
Rates spiked sharply across North Carolina and the country during and immediately after the COVID-19 pandemic, peaking in the 2021–22 school year. Recovery has been uneven. As of 2024–25, the statewide average remains well above pre-pandemic levels.
How We Identified Bright Spots
Identifying truly high-performing schools requires more than looking at raw absenteeism rates. A school serving predominantly high-poverty students in a rural county faces different structural challenges than one serving a more affluent suburban community. A fair comparison has to account for those differences.
We used a statistical model to estimate the absenteeism rate we would expect for each school, given:
- The share of economically disadvantaged students
- School enrollment size
- Grade configuration (elementary, middle, high)
- Whether the school had a high share of beginning teachers
- The school’s own absenteeism rate at its 2021–22 peak
- The district to which a school belongs
Qualification Criteria
Every bright spot candidate had to clear both of the following criteria:
Criterion 1 - Better than expected: The school’s observed absenteeism rate is meaningfully below what the model predicts for a school like it.
Criterion 2 - Genuine recovery: The school reduced its chronic absenteeism rate by at least 5 percentage points from its 2021–22 peak.
Key Findings
We identified 310 bright spot schools across 62 counties and 75 districts in North Carolina’s 2024–25 school year. The median school in this group reduced its chronic absenteeism rate by 19.2 percentage points since its 2021–22 peak. The majority serve student populations where more than half of students are economically disadvantaged.
Of these 310 schools, 184 had no quality-control flags requiring manual review. The remaining schools are still included because they met the analytic criteria, but they warrant additional validation due to factors such as small denominators or non-standard school types (alternative, magnet, etc.). In other words, the full list identifies likely bright spots, while the no-flag group represents the most straightforward cases for immediate follow-up.
Where Are They?
Bright spots are distributed widely across North Carolina, from the mountains to the coast. No single region dominates, and no single district accounts for an outsize share of schools.
How Much Did They Improve?
Every bright spot school improved since its 2021–22 peak. The chart below shows each school’s peak rate (x-axis) against its 2024–25 rate (y-axis). All schools fall below the diagonal line, meaning every one of them has made real progress. The vertical drop lines show the size of that improvement for each school, tracing how far its current rate sits below its pandemic-era peak. The longer the line, the greater the improvement.
Bright spot schools didn’t just improve in absolute terms; they also outperformed what the model predicted given their structural context. The chart below plots each school’s observed 2025 absenteeism rate against the rate the model expected based on poverty, enrollment, staffing, grade band, and pandemic-era peak. Schools below the diagonal are doing better than their structural profile would suggest; bright spot candidates are highlighted.
Why Context Matters
Chronic absenteeism is a multi-causal challenge, not a single student behavior problem. A strong attendance strategy has to account for barriers such as poverty, instability, transportation, staffing, grade span differences, and the uneven recovery that followed the pandemic. That is why this updated analysis matters. It shifts the conversation away from simple comparison and toward a fairer question: which schools are doing better than we would expect, given the realities they face?
This is also why the analysis is useful for continuous improvement. It does not claim to prove which program or strategy caused a school’s improvement. It helps narrow the field to schools worth studying more closely, especially schools serving students and communities that are too often overlooked when improvement is measured only in absolute terms.
Representation of High-Poverty Schools
A school serving predominantly high-poverty students in a rural county faces different structural challenges than one serving a more affluent suburban community. The updated, context-adjusted bright spot model is designed to account for those differences by identifying schools that are outperforming relative to expectations, not only schools with the largest raw declines in chronic absenteeism.
The chart below shows why that matters. Under the older 50% reduction rule, bright spots were more concentrated among lower-poverty schools. By contrast, the current context-adjusted model identifies bright spot schools with a higher median share of economically disadvantaged students, helping surface schools whose improvement might otherwise be missed in a blunt statewide comparison.
The difference is substantial. The median share of economically disadvantaged students among current bright spot schools is 59%, compared with 47% under the prior 50% reduction rule. In addition, 13% of bright spots identified by the current model are high-poverty schools, defined as schools where at least 75% of students are economically disadvantaged, compared with less than 1% under the old definition. These shifts suggest that the updated model is better positioned to recognize meaningful attendance improvement in high-poverty schools and communities.
Geographic Spread and District-Level Concentration
Bright spots are distributed across North Carolina, from the mountains to the coast. No single region dominates, and no single district accounts for an outsize share of identified schools. That geographic spread matters because it suggests attendance recovery is not confined to the most affluent or best-resourced parts of the state.
As shown in the map below, identified bright spot schools span 62 of North Carolina’s 100 counties and include rural, suburban, and urban communities. Among traditional public schools, bright spots are distributed across 66 of North Carolina’s 115 school districts. The map shows some clustering in districts such as Davidson, Union, Burke, Catawba, and Henderson, but bright spots are not concentrated in any single county or region.
The district-level map focuses on traditional public schools and shows the share of eligible schools in each district that met both bright-spot criteria. Darker shading indicates a higher share of eligible schools identified as bright spots. Districts in gray had no eligible traditional schools meeting both criteria.
The bar chart below shifts from geography to district-level concentration, showing each district’s bright spots as a share of its own eligible schools. Davidson County Schools stands out, with 82% of eligible schools qualifying as bright spots (28 of 34), followed by Burke County Schools at 68% (17 of 25). Several smaller districts also show high within-district shares, including Yancey and Ashe at 67%.
These high within-district shares may point to broader local conditions, routines, or supports that are helping multiple schools improve, rather than isolated school-level exceptions.
At the same time, the full set of 310 schools is distributed across 75 districts, with many districts contributing only one or two schools. Together, the map and bar chart suggest both statewide reach and meaningful local concentration in some communities.
School Spotlights
The following schools illustrate three bright spots from across North Carolina: one elementary, one middle, and one high school. They are presented as examples of unusually strong attendance recovery, not as proof of any single strategy. The administrative data can identify schools that are outperforming expectations, but they cannot by themselves explain why.
Elementary: Mountain View Elementary School
Mountain View Elementary School in Burke County serves 704 students, 66% of whom are economically disadvantaged, a share above the statewide median. Its 2024–25 chronic absenteeism rate of 13.9% is 17.9 percentage points below what the model predicts for a school with its profile. Since its 2021–22 peak of 39.4%, it has reduced chronic absenteeism by 25.5 percentage points.
Mountain View Elementary School
Burke County Schools · Burke County · Elementary
| Metric | Value |
|---|---|
| Enrollment | 704 students |
| Econ. Disadvantaged | 66% |
| Title I | Yes |
| Peak Rate (2021–22) | 39.4% |
| Current Rate (2024–25) | 13.9% |
| Model-Expected Rate | 31.8% |
| Improvement | 25.5 pp ▼ |
| % Reduction from Peak | 65% |
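As a quick arithmetic check, the spotlight metrics in the table above follow directly from the peak and current rates (illustrative sketch):

```python
# Checking the spotlight metrics from the table above.
peak, current = 39.4, 13.9                    # chronic absenteeism rates (%)
improvement_pp = peak - current               # percentage-point improvement
pct_reduction = improvement_pp / peak * 100   # relative reduction from peak

print(round(improvement_pp, 1))  # 25.5
print(round(pct_reduction))      # 65
```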
Middle: Mac Williams Middle
Mac Williams Middle in Cumberland County serves 1,188 students. Its 2024–25 rate of 19.4% is 23.2 percentage points below model expectation, one of the largest gaps among all middle schools in the candidate pool. Improvement since peak: 28.8 percentage points.
Mac Williams Middle
Cumberland County Schools · Cumberland County · Middle
| Metric | Value |
|---|---|
| Enrollment | 1,188 students |
| Econ. Disadvantaged | 60% |
| Title I | Yes |
| Peak Rate (2021–22) | 48.2% |
| Current Rate (2024–25) | 19.4% |
| Model-Expected Rate | 42.6% |
| Improvement | 28.8 pp ▼ |
| % Reduction from Peak | 60% |
High: Maiden High
Maiden High in Catawba County serves 880 students. It reduced chronic absenteeism by 41.6 percentage points from its 2021–22 peak, finishing 2024–25 at 13.1%, well below the 35.7% the model predicts for a high school with its characteristics.
Maiden High
Catawba County Schools · Catawba County · High
| Metric | Value |
|---|---|
| Enrollment | 880 students |
| Econ. Disadvantaged | 48% |
| Title I | No |
| Peak Rate (2021–22) | 54.6% |
| Current Rate (2024–25) | 13.1% |
| Model-Expected Rate | 35.7% |
| Improvement | 41.6 pp ▼ |
| % Reduction from Peak | 76% |
Limitations
The analysis in this report is intended to surface promising signals, not to make causal claims. A school appearing as a bright spot means its attendance recovery looks unusually strong given the factors included in the model, but additional review is still needed to understand what drove that improvement and whether it reflects durable, transferable practice.
Additional limitations worth considering:
Measurement quality in attendance data. Even when public reporting is the best available source, attendance coding practices can vary across schools and districts. Differences in how absences are recorded, corrected, or submitted could affect comparisons.
Omitted-variable bias. The model adjusts for several important structural factors, but it does not account for every relevant influence on attendance, such as transportation changes, leadership transitions, local community partnerships, weather disruptions, school climate shifts, or short-term interventions not captured in the data.
Regression to the mean. Because this approach measures improvement from a pandemic-era peak, some schools may appear to improve in part because exceptionally high rates moved closer to typical levels over time, not only because of unusually effective practices.
Within-school variation. A school-level rate can improve even as disparities across student subgroups widen. For that reason, bright spot identification should be followed by subgroup review and local validation.
Manual review flags. Of the 310 schools that met the analytic criteria, 126 carry one or more QC flags, such as small denominators or non-standard school types. These schools may still be legitimate bright spots, but they warrant additional review before being used as exemplars.
Peak year is fixed at 2021–22 statewide. In a small number of cases a school’s own worst year may differ; the `p_peak` covariate addresses this partially by using each school’s own 2021–22 rate rather than a statewide composite.
Finally, some schools may reflect durable improvement while others may reflect shorter-term recovery or volatility. The report is best understood as a first-pass identification tool, not a final determination of which schools have sustained, replicable attendance practices. This is why follow-up validation matters.
Next Steps
This report is intended to identify promising signals for follow-up learning, not to serve as the final version of the Bright Spots methodology. The next phase of the work has two purposes: to learn more systematically from the schools identified in the 2025 analysis and to strengthen the methodology used for the 2026 cycle.
Bright Spots Learning
The 2025 analysis completes the first two phases of a Positive Deviance approach: defining the problem and identifying schools that appear to be outperforming similar peers. The next step is to move from statistical signal to deeper learning about what may be working in those settings, using a context-sensitive lens informed by Carolina Demography’s school typologies.
Case validation: Selected schools will be reviewed more closely to confirm that unusually strong recovery reflects a meaningful signal rather than data anomalies, short-term volatility, or other factors that warrant caution. This review will include subgroup patterns and other checks that help distinguish strong candidates for learning from cases that require more caution.
Practice discovery: Validated schools will then become the focus of structured positive deviance inquiry aimed at identifying the routines, supports, and enabling conditions that may be contributing to stronger-than-expected recovery. Guided by school typologies developed with Carolina Demography, this phase will support more peer learning by focusing on schools operating in comparable contexts.
Translation to action: Insights from validated bright-spot schools will then be translated into practical supports for similar schools through co-designed attendance action plans and related implementation tools. The aim is to move promising local practice from discovery toward adaptation and use in comparable settings.
Methodological Refinements
The 2024–25 analysis provides a strong first pass, but it also highlights several areas where the next cycle can improve measurement, model specification, and interpretation. The analysis for the 2025–26 school year will build on this foundation with a more refined and better-validated approach.
Improved measurement and QC review: Future work will validate attendance patterns against additional administrative sources where possible and review schools with potential data anomalies more systematically. This will help distinguish true improvement from reporting artifacts or unusual year-to-year volatility.
Expanded model inputs: The 2026 analysis will explore whether additional school and community covariates improve model fit and reduce omitted-variable concerns, including broader staffing, mobility, performance, discipline, and contextual indicators where data quality is sufficient.
Reduced dependence on a single recovery frame: Because the 2025 analysis is anchored to improvement from the 2021–22 peak, the 2026 cycle will test alternative specifications, including multi-year trend structures and approaches less sensitive to regression-to-the-mean concerns.
Greater attention to within-school variation: Future analyses will place greater emphasis on subgroup patterns so that strong overall recovery can be interpreted alongside within-school variation rather than only at the aggregate school level.
Technical Brief
This appendix is intended for researchers and analysts who want to understand the methodology behind the bright spot identification process. The main report answers a practical question: which schools appear to be recovering from chronic absenteeism more strongly than expected given their context? The appendix explains how that signal was estimated and why this model was selected over simpler alternatives.
Data Sources
All data are drawn from NC School Report Card public data files, available at the NC DPI School Report Card Resources for Researchers page. The following files were used:
| Dataset | Years Used | Description |
|---|---|---|
| Chronic Absenteeism | 2018–2025 | School-level counts and rates |
| School Information | 2024–25 | School type, grade span, locale |
| School Size | 2024–25 | Enrollment denominator |
| School Demographics | 2024–25 | Poverty and race/ethnicity |
| Teacher Experience | 2024–25 | Beginning teacher share |
| Economically Disadvantaged Students | 2024–25 | EDS counts by subgroup |
| NBPTS Certification | 2024–25 | National Board certified teachers |
| Teacher Effectiveness | 2024–25 | Overall effectiveness ratings |
| EDDIE School Crosswalk | 2024–25 | Agency code → PSU linkage |
| IES Geocoordinates | 2024–25 | School latitude/longitude |
Eligibility Criteria
Schools were included in the modeling pool only if they met three criteria designed to improve comparability and reduce obvious distortions in the results:
- Minimum enrollment (100 students): 83 schools fell below the 100-student enrollment minimum and were excluded from the modeling pool. Very small schools can produce unstable rates from year to year.
- Not a virtual school: 44 virtual school codes were excluded because virtual schools operate under attendance conditions that are not directly comparable to brick-and-mortar schools. They were also too few in number to support a separate model, making fair model-based comparison difficult.
- No enrollment spike: 23 schools with more than 20% enrollment growth from their peak year were excluded as potential structural anomalies.
Final modeling pool: 2,486 schools (2,304 traditional; 182 charter). Note there is some overlap between categories (a virtual school could also be below 100 students).
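The three screens can be sketched as a simple filter. This is illustrative only; the field names are hypothetical, not the report's actual NC DPI variable names.

```python
# Illustrative filter for the three eligibility screens (hypothetical field names).
def eligible(school: dict) -> bool:
    return (
        school["enrollment"] >= 100                   # minimum size
        and not school["is_virtual"]                  # brick-and-mortar only
        and school["growth_from_peak_year"] <= 0.20   # no enrollment spike
    )

pool = [
    {"enrollment": 704, "is_virtual": False, "growth_from_peak_year": 0.03},
    {"enrollment": 80,  "is_virtual": False, "growth_from_peak_year": 0.00},
    {"enrollment": 650, "is_virtual": True,  "growth_from_peak_year": 0.01},
]
print([eligible(s) for s in pool])  # [True, False, False]
```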
Model Selection
Three model families were evaluated. The selected model was a beta-binomial mixed-effects model fit with district clustering and an explicit overdispersion term using the glmmTMB package in R. Simpler alternatives were rejected because they either failed to account adequately for district clustering or produced residual variation that remained far too large. By contrast, the selected model generated much better-calibrated residuals, making it the strongest basis for identifying schools performing meaningfully better than expected. Models were run separately for traditional and charter sectors.
Models evaluated
1. Quasibinomial GLM (rejected)
A pooled glm() with a quasibinomial family. Overdispersion is corrected externally by inflating standard errors with a Pearson-based phi estimate, but it is not part of the likelihood. No district clustering. Pearson residual SD ≈ 4.9 (target: 1.0). No AIC available.
2. Binomial GLMER — lme4 (rejected)
A glmer() with binomial family and PSU random intercept. Adds district clustering (correct) but assumes binomial variance — no overdispersion modeled beyond what clustering absorbs. The model compensates by pushing overdispersion into the random effect, distorting the district-level adjustment. Pearson residual SD ≈ 3.5 (target: 1.0).
3. Beta-binomial — glmmTMB (selected)
Estimates overdispersion (phi) within the likelihood. The quasi-standardized residual is derived from a self-consistent formula using rho = 1/(1+phi). Residual SD ≈ 1.21 — well-calibrated. Models run separately for traditional and charter sectors.
AIC comparison
AIC (Akaike Information Criterion) measures relative model fit, penalizing for the number of parameters. Lower values indicate a better-fitting model. The quasibinomial GLM is excluded because quasi-likelihood estimation does not produce a proper likelihood and therefore has no AIC. The BB models are highlighted in bold.
**Model Comparison — AIC and Log-Likelihood**

| Model | Family | Sector | n | AIC¹ | log-Lik | ϕ (dispersion) |
|---|---|---|---|---|---|---|
| **BB Traditional (glmmTMB)** | Beta-binomial | Traditional | 2,304 | 22,151 | −11,062 | 62.6 |
| **BB Charter (glmmTMB)** | Beta-binomial | Charter | 182 | 1,916 | −946 | 26.7 |
| GLMER Pooled (lme4, binomial) | Binomial | Pooled | 2,522 | 49,455 | −24,716 | — |
| GLMER Traditional (lme4, binomial) | Binomial | Traditional | 2,329 | 46,407 | −23,192 | — |

¹ Quasibinomial GLM excluded; quasi-likelihood has no AIC.
The beta-binomial traditional model outperforms the binomial GLMER traditional model by approximately 24,000 AIC units, a substantial improvement in fit.
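That gap follows directly from the AIC values reported for the traditional-sector models (recall AIC = −2·log-likelihood + 2·k, where k is the number of parameters; lower is better):

```python
# Difference in AIC between the rejected binomial GLMER and the selected
# beta-binomial model, traditional sector, using the reported values.
aic_glmer_traditional = 46_407  # binomial GLMER (lme4)
aic_bb_traditional = 22_151     # beta-binomial (glmmTMB)
delta_aic = aic_glmer_traditional - aic_bb_traditional
print(delta_aic)  # 24256
```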
Variable Selection
Predictors were selected on three grounds: (1) theoretical relevance as structural drivers of absenteeism, (2) data availability across the full modeling pool, and (3) incremental AIC improvement after accounting for collinearity.
Included predictors
| Predictor | Description | Rationale |
|---|---|---|
| `p_peak` | School’s own 2021–22 absenteeism rate | Most important predictor (ΔAIC = −870, traditional). Anchors each school’s structural baseline; without it, residuals conflate “high-poverty schools haven’t recovered” with genuine outperformance |
| `pct_eds` | Share of economically disadvantaged students | Core structural driver (poverty → instability → absence). Remains significant after `p_peak` adjustment (r = 0.58 between them; the `pct_eds` coefficient drops ~47% after adding `p_peak`, which reflects correct adjustment, not collinearity) |
| `log(enrollment)` | Log-transformed total enrollment | Enrollment is right-skewed with a long tail of large schools. The log transform corrects for this and captures diminishing returns: each doubling of enrollment has the same modeled effect regardless of starting size |
| `grade_band` | Elementary / Middle / High / Combined | Chronic absenteeism rates differ systematically by grade level; middle and high schools run structurally higher than elementary |
| `pct_beg_teachers` | Share of beginning-year teachers | Teacher instability predicts student disengagement. Included in the traditional model only; data coverage is insufficient for the small charter pool |
| `school_type_2` | Regular vs. non-regular school type | Captures structural differences among magnet, alternative, and regular schools; low cell counts in non-regular categories informed the manual review flag |
| `(1 \| psu_name)` | PSU/district random intercept | Traditional schools are nested within districts; without this, residuals inflate apparent outperformance for schools in high-absenteeism districts. Dropped for charters, since each charter is its own LEA in NC |
Variables considered but excluded
| Variable | Reason excluded |
|---|---|
| `title_i` | Binary overlap with `pct_eds` (Title I schools are by definition high-poverty). A challenger model confirmed redundancy: ΔAIC = −1.5, ΔBIC = −7.0 favoring exclusion; z = 0.71, p = 0.48 in the traditional model |
| `pct_prov_teachers` | Correlated with `pct_beg_teachers` (r ≈ 0.61); no incremental AIC improvement; multicollinear with the retained predictor |
| `nbpts_pct` | More than 40% missing across the modeling pool; imputing the median would introduce systematic bias |
| Teacher effectiveness ratings (`ni_pct`, `eff_pct`, `he_pct`) | Same missingness problem as NBPTS; year-to-year instability adds noise rather than signal |
| District demographic composition | District-level racial/gender composition is largely captured by school-level `pct_eds`; no AIC improvement; multicollinearity |
| `pct_prov_teachers` (charter) | Same as above, plus sparse reporting for small charters |
Model Specification
Two separate beta-binomial models were fit: one for traditional public schools, one for charter schools.
Traditional schools (n = 2,304):
\[\begin{align} \text{logit}(p_i) = \; & \beta_0 + \beta_1 \cdot \text{pct\_eds}_i + \beta_2 \cdot \log(\text{enrollment}_i) \\ & + \beta_3 \cdot \text{grade\_band}_i + \beta_4 \cdot \text{school\_type\_2}_i \\ & + \beta_5 \cdot \text{pct\_beg\_teachers}_i + \beta_6 \cdot \text{p\_peak}_i + u_{j[i]} \end{align}\]
where \(u_j \sim N(0, \sigma^2_u)\) is a PSU (district) random intercept and \(p_i\) follows a beta-binomial distribution with dispersion \(\phi = 62.6\).
Charter schools (n = 182):
Same fixed effects, excluding pct_beg_teachers and the PSU random intercept (each charter is its own LEA in NC, so a random effect is not supported with one school per group). Dispersion \(\phi = 26.7\).
The `p_peak` covariate (each school’s own 2021–22 absenteeism rate) was a critical addition: it improved AIC by ~870 units in the traditional model and ensures that schools are compared to a baseline that reflects their own pandemic-era context.
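To illustrate the mechanics of the specification, the sketch below turns a linear predictor on the logit scale into a model-expected rate. The coefficients here are invented purely for demonstration (they are not estimates from the report), and the grade band, school type, and beginning-teacher terms are omitted for brevity.

```python
import math

# Illustrative prediction step of a logit-link model. Coefficients are made up
# (NOT the report's estimates); several covariates are omitted for brevity.
def expected_rate(pct_eds: float, log_enroll: float, p_peak: float,
                  beta0: float = -3.0, b_eds: float = 1.2,
                  b_enroll: float = 0.05, b_peak: float = 2.5,
                  district_re: float = 0.0) -> float:
    eta = (beta0 + b_eds * pct_eds + b_enroll * log_enroll
           + b_peak * p_peak + district_re)  # linear predictor, logit scale
    return 1 / (1 + math.exp(-eta))          # inverse logit -> expected rate

# Hypothetical school: 60% EDS, enrollment of 800, 2021-22 peak rate of 40%.
rate = expected_rate(0.60, math.log(800), 0.40)
print(round(rate, 3))  # roughly 0.28 with these made-up coefficients
```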
Candidate Selection Gates
After modeling, bright spot schools were identified using two simultaneous criteria:
| Gate | Criterion | Rationale |
|---|---|---|
| 1 | Quasi-standardized residual ≤ −1.25 | Observed rate is more than 1.25 SDs below the model-expected rate |
| 2 | Improvement ≥ 5 pp from 2021–22 peak | Confirms genuine recovery, not an artifact |
310 schools met both criteria. Of those, 184 had no quality-control flags and are confirmed candidates; the remaining schools are flagged for manual review before public reporting.
The chart below illustrates how the two gates work together. Each point is a school from the full eligible pool; bright spot schools must clear both thresholds simultaneously: the vertical line (residual ≤ −1.25) and the horizontal line (improvement ≥ 5 pp).
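In code, the two gates amount to a simple conjunction (illustrative sketch; the argument names and sample values are ours):

```python
# Sketch of the two selection gates; names and sample values are illustrative.
def is_bright_spot(residual_quasi_std: float, improvement_pp: float) -> bool:
    better_than_expected = residual_quasi_std <= -1.25  # Gate 1
    genuine_recovery = improvement_pp >= 5.0            # Gate 2 (pp from peak)
    return better_than_expected and genuine_recovery

print(is_bright_spot(-1.60, 19.2))  # True: clears both gates
print(is_bright_spot(-1.60, 3.0))   # False: too little improvement
print(is_bright_spot(-0.80, 19.2))  # False: not far enough below expectation
```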
Residual Analysis
A residual is the difference between what a school actually achieved and what the model predicted it would achieve given its structural characteristics. In a well-specified model, residuals should be centered at zero (meaning the model is neither systematically optimistic nor pessimistic) and scaled so that a one-unit change corresponds to one standard deviation of unexplained variation. The two checks below confirm both properties hold for the beta-binomial model used here. This matters because the bright spot threshold (quasi-standardized residual ≤ −1.25) is only meaningful if the residuals are properly scaled; a miscalibrated model would make the threshold arbitrary and incomparable across schools.
Residual calibration
A well-calibrated model produces standardized residuals with a standard deviation close to 1.0. Values much larger than 1.0 indicate the model’s assumed variance is too small relative to the data, meaning any threshold applied to those residuals would be arbitrary and not comparable across schools. The BB model’s quasi-standardized residual SD of ≈ 1.21 confirms the threshold of −1.25 means what it says: a school performing more than 1.25 standard deviations better than its structural prediction.
**Residual Calibration by Model**

| Model | Residual Type | Residual SD¹ | Target |
|---|---|---|---|
| BB sector-split (glmmTMB) | Quasi-std (rho from phi) | 1.209 | 1.0 |
| GLMER pooled (lme4) | Pearson | 3.490 | 1.0 |
| GLMER traditional (lme4) | Pearson | 3.580 | 1.0 |
| QB GLM pooled | Pearson | 4.910 | 1.0 |

¹ Predecessor SDs are fixed values from the model development session. BB SD computed from `schools_2025_sector_results`.
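Under one common beta-binomial parameterization, a quasi-standardized residual can be computed as below. This is a sketch of the general idea with rho = 1/(1 + phi), not necessarily the report's exact self-consistent formula; the example numbers loosely mirror the elementary spotlight school.

```python
import math

# Quasi-standardized residual under a beta-binomial variance assumption:
# Var(y) = n * p * (1 - p) * (1 + (n - 1) * rho), with rho = 1 / (1 + phi).
# This parameterization is an assumption for illustration, not the report's code.
def quasi_std_residual(y: int, n: int, p_hat: float, phi: float) -> float:
    rho = 1 / (1 + phi)
    var = n * p_hat * (1 - p_hat) * (1 + (n - 1) * rho)
    return (y - n * p_hat) / math.sqrt(var)

# Hypothetical school: 98 chronically absent students out of 704 enrolled
# (observed rate ~13.9%) against a model-expected rate of 31.8%, using the
# traditional-sector dispersion phi = 62.6 from the report.
r = quasi_std_residual(98, 704, 0.318, 62.6)
print(round(r, 2))  # about -2.93, well past the -1.25 bright spot threshold
```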
Residual distribution
The density plot below shows the full distribution of quasi-standardized residuals for the 2,486 schools in the modeling pool, split by sector. A properly calibrated model produces a distribution centered at zero, meaning the model neither systematically over- nor under-predicts absenteeism for any group of schools. The dashed vertical line marks the −1.25 bright spot threshold; schools to the left of this line are bright spots.
Report generated May 03, 2026 · Analysis code available upon request · Data: NC DPI School Report Card