Compare commits

...

10 Commits

Author SHA1 Message Date
azalea 7d87f87938 [U] Revise README for installation and demo info
Updated installation instructions and demo generation details in README.
2026-05-10 12:24:39 -04:00
azalea 7f780395f2 [+] Demo 2026-05-10 16:18:35 +00:00
azalea 777f071c7b [F] Fix default branch 2026-05-10 15:23:50 +00:00
azalea 464eb90c10 [F] Name validation 2026-05-10 14:25:28 +00:00
azalea f5b8938e4e [O] Fast path for serve 2026-05-10 13:43:32 +00:00
azalea 9141811f0f [U] Update logo size in README
Increased logo size from 50% to 70% in README.
2026-05-10 21:14:00 +08:00
azalea 539858b151 [O] Increase export res 2026-05-10 13:12:07 +00:00
azalea e39d628852 [+] Logo 2026-05-10 13:08:18 +00:00
azalea 13f2118267 [O] Explicit delete missing question, local backups 2026-05-10 13:07:15 +00:00
azalea d9726c4235 [O] Wording 2026-05-10 11:07:08 +00:00
18 changed files with 1706 additions and 108 deletions
+41 -8
@@ -1,3 +1,7 @@
<p align="center">
<img src="./docs/refray.png" alt="refray logo" width="70%"/>
</p>
# refray
A tool to keep your repos in sync across all git platforms, while being able to work from everywhere all at once.
@@ -8,17 +12,46 @@ Created because github is so unusable and [unreliable](https://red-squares.cian.
- **read-write mirrors**: Make changes from any provider, and the changes will sync to the others
- **webhook support**: Sync right after push, reducing the potential divergence window
- **conflict handling**: Rebase or open pull requests when two platforms diverge
- **tracks deletions**: Delete branches/repos across platforms when they are deleted from one platform
- **tracks deletions**: Branches/repo deletions sync across platforms (with backup)
- **selective sync**: Sync a subset of repos by regex white/black list, or by private/public visibility
- **multithreaded**: Process multiple repos simultaneously!
Supported platforms: GitHub, GitLab, Gitea, Forgejo
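As a rough illustration of the selective-sync rules, here is a minimal sketch of a filter check (a hypothetical helper; plain substring patterns stand in for refray's real regex white/black lists):

```rust
/// Illustrative only: decide whether a repo name passes the selective-sync
/// filters. Substring patterns stand in for the real regex lists.
fn should_sync(name: &str, whitelist: Option<&str>, blacklist: Option<&str>) -> bool {
    // A whitelist, when present, must match for the repo to be considered.
    if let Some(white) = whitelist {
        if !name.contains(white) {
            return false;
        }
    }
    // A blacklist match always excludes the repo.
    if let Some(black) = blacklist {
        if name.contains(black) {
            return false;
        }
    }
    true
}
```

The real filters also combine with the private/public visibility setting; this sketch covers only the name-pattern half.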
> [!NOTE]
> Meow
> My cat made this codebase, meow
![demo](./docs/demo.webp)
<!--
The demo was rendered from an asciinema cast with capped idle pauses, Sarasa Mono SC, a One Half Dark palette with lighter dark-gray ANSI slots, and a larger font:
```sh
agg --idle-time-limit 1 \
--theme '282C34,DCDFE4,5C6370,E06C75,98C379,E5C07B,61AFEF,C678DD,56B6C2,DCDFE4,7F848E,E06C75,98C379,E5C07B,61AFEF,C678DD,56B6C2,DCDFE4' \
--text-font-family 'Sarasa Mono SC' \
--font-size 24 \
--cols 160 \
--rows 42 \
../out.cast \
demo.gif
ffmpeg -i demo.gif \
-loop 0 \
-c:v libwebp_anim \
-lossless 1 \
-compression_level 6 \
-q:v 100 \
docs/demo.webp
```
-->
## Install
### Option 1. Install from source
### Option 1. Install with Cargo
1. Install Rust and cargo if you don't have them: https://rustup.rs
2. `cargo install refray`
@@ -173,6 +206,10 @@ To move installed hooks to a new public URL, use `webhook update`. It removes ho
refray webhook update https://new.example.com/webhook
```
## Issues and Pull Requests
Issues and pull requests are not mirrored.
## Sync Semantics
Each mirror group is treated as a set of equivalent namespaces. Repositories are matched by repository name across all endpoints.
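The name-based matching can be sketched as follows (hypothetical types; the real code works with endpoint and repo structs, not plain strings):

```rust
use std::collections::BTreeMap;

/// Illustrative: collect per-endpoint repo listings into groups keyed by
/// repository name, so each name maps to the endpoints that currently have it.
fn group_by_name<'a>(listings: &[(&'a str, &'a str)]) -> BTreeMap<&'a str, Vec<&'a str>> {
    let mut groups: BTreeMap<&'a str, Vec<&'a str>> = BTreeMap::new();
    for (endpoint, repo) in listings {
        groups.entry(*repo).or_default().push(*endpoint);
    }
    groups
}
```

A repository is then synced within its group when the same name appears on two or more endpoints.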
@@ -203,7 +240,7 @@ Conflict resolution strategies are configured per mirror group:
When a previously opened conflict pull request is merged, the next sync sees the merged branch as the winning tip, pushes it to the other endpoints, and closes stale `refray/conflicts/...` pull requests for that branch.
Repository and branch deletions are propagated only when it is safe to infer intent. If a repository existed on every endpoint in the previous successful sync, then disappears from one endpoint while the remaining endpoints still have the previous synced refs, `refray` deletes it from the remaining endpoints instead of recreating it. If the repository was deleted everywhere, `refray` removes its saved sync state. If the repository was deleted on one endpoint but changed elsewhere, it is treated as a conflict and skipped.
Repository and branch deletions are propagated only when it is safe to infer intent, and `refray` writes local backup refs and bundle files under the work-dir `backups/` directory before propagating those deletions. If a repository existed on every endpoint in the previous successful sync, then disappears from one endpoint while the remaining endpoints still have the previous synced refs, `refray` deletes it from the remaining endpoints instead of recreating it when `delete_missing = true`. If `delete_missing = false`, that missing repository is not treated as a deletion and normal missing-repository handling applies. If the repository was deleted everywhere, `refray` removes its saved sync state after creating a local backup from the mirror cache. If the repository was deleted on one endpoint but changed elsewhere, it is treated as a conflict and skipped.
Branch deletion follows the same rule at branch scope: if a branch existed on every endpoint in the previous successful sync, then disappears from one endpoint while the remaining endpoints still have the previous tip, `refray` deletes it from the remaining endpoints instead of recreating it. If the branch was deleted on one endpoint but changed elsewhere, it is treated as a conflict and skipped.
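The "safe to infer intent" rule can be sketched as a small decision function (hypothetical enum and signature, not refray's actual API):

```rust
/// Illustrative branch-deletion decision: the branch vanished from one
/// endpoint; `remaining_tips` are the tips still present on the others.
#[derive(Debug, PartialEq)]
enum Decision {
    Propagate, // every remaining endpoint still holds the previously synced tip
    Conflict,  // the branch moved somewhere since the last sync: skip it
}

fn decide_branch_deletion(previous_tip: &str, remaining_tips: &[&str]) -> Decision {
    if remaining_tips.iter().all(|tip| *tip == previous_tip) {
        Decision::Propagate
    } else {
        Decision::Conflict
    }
}
```

Deletion is only propagated when nothing else changed; any divergence from the last synced tip downgrades the operation to a conflict.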
@@ -240,7 +277,3 @@ REFRAY_E2E_ALLOW_DESTRUCTIVE=1 \
```
By default, cleanup only deletes repositories named `refray-e2e-*`. To start by deleting every owned repository visible to the configured accounts, set `REFRAY_E2E_CLEAR_ALL_REPOS=DELETE_ALL_OWNED_REPOS`. Provider skips (`REFRAY_E2E_SKIP_GITHUB`, `REFRAY_E2E_SKIP_GITLAB`, `REFRAY_E2E_SKIP_GITEA`, `REFRAY_E2E_SKIP_FORGEJO`) and `REFRAY_E2E_ALLOW_PARTIAL=1` are available for local debugging, but the full support check should run with all four providers.
## Issues and Pull Requests
Issues and pull requests are not mirrored.
Three binary image files changed (not shown): 1.6 MiB, 118 KiB, 433 KiB.
+2
@@ -62,6 +62,8 @@ pub struct MirrorConfig {
pub repo_blacklist: Option<String>,
#[serde(default = "default_true")]
pub create_missing: bool,
#[serde(default = "default_true")]
pub delete_missing: bool,
#[serde(default)]
pub visibility: Visibility,
#[serde(default)]
+79
@@ -52,6 +52,13 @@ pub struct BranchUpdate {
pub force: bool,
}
#[derive(Clone, Debug)]
pub struct RefBackup {
pub refname: String,
pub sha: String,
pub description: String,
}
#[derive(Clone, Debug)]
pub struct BranchRebaseDecision {
pub branch: String,
@@ -420,6 +427,63 @@ impl GitMirror {
Ok(())
}
pub fn backup_refs(&self, backups: &[RefBackup]) -> Result<Vec<String>> {
let mut refs = Vec::new();
for backup in backups {
crate::logln!(
" {} {}",
style("backup").cyan().bold(),
style(&backup.description).dim()
);
self.run(["update-ref", &backup.refname, &backup.sha])?;
refs.push(backup.refname.clone());
}
Ok(refs)
}
pub fn create_bundle(&self, path: &Path, refs: &[String]) -> Result<bool> {
if refs.is_empty() {
return Ok(false);
}
if self.dry_run {
crate::logln!(
" {} git bundle create {} {}",
style("dry-run").yellow().bold(),
style(path.display()).dim(),
style(refs.join(" ")).dim()
);
return Ok(false);
}
if let Some(parent) = path.parent() {
fs::create_dir_all(parent)
.with_context(|| format!("failed to create {}", parent.display()))?;
}
let output = self
.command()
.arg("bundle")
.arg("create")
.arg(path)
.args(refs)
.output()
.with_context(|| "failed to run git bundle create")?;
if !output.status.success() {
let stdout = self
.redactor
.redact(&String::from_utf8_lossy(&output.stdout));
let stderr = self
.redactor
.redact(&String::from_utf8_lossy(&output.stderr));
return Err(GitCommandError::new("git bundle create", stdout, stderr).into());
}
crate::logln!(
" {} {}",
style("backup bundle").cyan().bold(),
style(path.display()).dim()
);
Ok(true)
}
fn push_branch_update(&self, remote: &RemoteSpec, update: &BranchUpdate) -> Result<()> {
let refspec = if update.force {
format!("+{}:refs/heads/{}", update.sha, update.branch)
@@ -752,6 +816,13 @@ pub fn is_disabled_repository_error(error: &anyhow::Error) -> bool {
.any(|error| is_disabled_repository_stderr(error.stderr()))
}
pub fn is_missing_repository_error(error: &anyhow::Error) -> bool {
error
.chain()
.filter_map(|cause| cause.downcast_ref::<GitCommandError>())
.any(|error| is_missing_repository_stderr(error.stderr()))
}
fn missing_remotes(all_remote_names: &[String], source_remotes: &[String]) -> Vec<String> {
all_remote_names
.iter()
@@ -768,6 +839,14 @@ fn is_disabled_repository_stderr(stderr: &str) -> bool {
|| stderr.contains("dmca takedown")
}
fn is_missing_repository_stderr(stderr: &str) -> bool {
let stderr = stderr.to_ascii_lowercase();
(stderr.contains("repository") && stderr.contains("not found"))
|| stderr.contains("project you were looking for could not be found")
|| stderr.contains("does not appear to be a git repository")
|| stderr.contains("the requested url returned error: 404")
}
impl Redactor {
pub fn new(secrets: Vec<String>) -> Self {
let secrets = secrets
+55 -2
@@ -115,6 +115,9 @@ fn add_sync_group_styled(config: &mut Config, theme: &ColorfulTheme) -> Result<(
let endpoints = prompt_sync_group_endpoints_styled(config, theme, &[])?;
let sync_visibility = prompt_sync_visibility_styled(theme, None)?;
let repo_filters = prompt_repo_filters_styled(theme, None)?;
print_deletion_backup_notice_styled();
let create_missing = prompt_create_missing_styled(theme, None)?;
let delete_missing = prompt_delete_missing_styled(theme, None)?;
let conflict_resolution = prompt_conflict_resolution_styled(theme, None)?;
config.upsert_mirror(MirrorConfig {
name: next_mirror_name(config),
@@ -122,7 +125,8 @@ fn add_sync_group_styled(config: &mut Config, theme: &ColorfulTheme) -> Result<(
sync_visibility,
repo_whitelist: repo_filters.whitelist,
repo_blacklist: repo_filters.blacklist,
create_missing: true,
create_missing,
delete_missing,
visibility: Visibility::Private,
conflict_resolution,
});
@@ -447,6 +451,8 @@ fn edit_sync_group_styled(config: &mut Config, theme: &ColorfulTheme) -> Result<
let existing_sync_visibility = config.mirrors[index].sync_visibility.clone();
let existing_repo_whitelist = config.mirrors[index].repo_whitelist.clone();
let existing_repo_blacklist = config.mirrors[index].repo_blacklist.clone();
let existing_create_missing = config.mirrors[index].create_missing;
let existing_delete_missing = config.mirrors[index].delete_missing;
let existing_conflict_resolution = config.mirrors[index].conflict_resolution.clone();
let endpoints = prompt_sync_group_endpoints_styled(config, theme, &existing)?;
let sync_visibility = prompt_sync_visibility_styled(theme, Some(&existing_sync_visibility))?;
@@ -455,12 +461,17 @@ fn edit_sync_group_styled(config: &mut Config, theme: &ColorfulTheme) -> Result<
blacklist: existing_repo_blacklist,
};
let repo_filters = prompt_repo_filters_styled(theme, Some(&existing_repo_filters))?;
print_deletion_backup_notice_styled();
let create_missing = prompt_create_missing_styled(theme, Some(existing_create_missing))?;
let delete_missing = prompt_delete_missing_styled(theme, Some(existing_delete_missing))?;
let conflict_resolution =
prompt_conflict_resolution_styled(theme, Some(&existing_conflict_resolution))?;
config.mirrors[index].endpoints = endpoints;
config.mirrors[index].sync_visibility = sync_visibility;
config.mirrors[index].repo_whitelist = repo_filters.whitelist;
config.mirrors[index].repo_blacklist = repo_filters.blacklist;
config.mirrors[index].create_missing = create_missing;
config.mirrors[index].delete_missing = delete_missing;
config.mirrors[index].conflict_resolution = conflict_resolution;
prompt_webhook_setup_styled(config, theme)?;
println!(
@@ -817,6 +828,31 @@ fn prompt_repo_pattern_styled(
Ok(parse_repo_pattern(&value))
}
fn print_deletion_backup_notice_styled() {
println!();
println!(
"{} {}",
style("Deletion backups").cyan().bold(),
style("refray keeps a local backup before propagating repository or branch deletes").dim()
);
}
fn prompt_create_missing_styled(theme: &ColorfulTheme, existing: Option<bool>) -> Result<bool> {
Confirm::with_theme(theme)
.with_prompt("Create repositories that are missing from an endpoint?")
.default(existing.unwrap_or(true))
.interact()
.map_err(Into::into)
}
fn prompt_delete_missing_styled(theme: &ColorfulTheme, existing: Option<bool>) -> Result<bool> {
Confirm::with_theme(theme)
.with_prompt("When a previously synced repository is deleted from one endpoint, delete it everywhere?")
.default(existing.unwrap_or(true))
.interact()
.map_err(Into::into)
}
fn validate_repo_pattern(value: &str) -> std::result::Result<(), String> {
let Some(pattern) = parse_repo_pattern(value) else {
return Ok(());
@@ -913,10 +949,11 @@ fn sync_group_summary(config: &Config, mirror: &MirrorConfig) -> String {
.collect::<Vec<_>>()
.join(" <-> ");
format!(
"{} ({}, {}, {})",
"{} ({}, {}, {}, {})",
endpoints,
sync_visibility_label(&mirror.sync_visibility),
repo_filter_label(mirror),
repo_lifecycle_label(mirror),
conflict_resolution_label(&mirror.conflict_resolution)
)
}
@@ -938,6 +975,22 @@ fn repo_filter_label(mirror: &MirrorConfig) -> String {
}
}
fn repo_lifecycle_label(mirror: &MirrorConfig) -> String {
format!(
"missing: {}, deletes: {}",
if mirror.create_missing {
"create"
} else {
"skip"
},
if mirror.delete_missing {
"propagate"
} else {
"keep"
}
)
}
fn conflict_resolution_label(strategy: &ConflictResolutionStrategy) -> &'static str {
match strategy {
ConflictResolutionStrategy::Fail => "conflicts: fail",
+105 -9
@@ -40,6 +40,12 @@ pub struct PullRequestInfo {
pub url: Option<String>,
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum WebhookInstallOutcome {
Created,
Existing,
}
pub fn list_mirror_repos(
config: &Config,
mirror: &MirrorConfig,
@@ -166,13 +172,26 @@ impl<'a> ProviderClient<'a> {
)
}
pub fn set_default_branch(
&self,
endpoint: &EndpointConfig,
repo_name: &str,
branch: &str,
) -> Result<()> {
dispatch_provider!(self.site.provider,
github => self.github_set_default_branch(endpoint, repo_name, branch),
gitlab => self.gitlab_set_default_branch(endpoint, repo_name, branch),
gitea_like => self.gitea_set_default_branch(endpoint, repo_name, branch),
)
}
pub fn install_webhook(
&self,
endpoint: &EndpointConfig,
repo: &RemoteRepo,
url: &str,
secret: &str,
) -> Result<()> {
) -> Result<WebhookInstallOutcome> {
dispatch_provider!(self.site.provider,
github => self.github_install_webhook(endpoint, repo, url, secret),
gitlab => self.gitlab_install_webhook(endpoint, repo, url, secret),
@@ -311,13 +330,24 @@ impl<'a> ProviderClient<'a> {
self.delete(&url).map(|_| ())
}
fn github_set_default_branch(
&self,
endpoint: &EndpointConfig,
repo_name: &str,
branch: &str,
) -> Result<()> {
let url = self.repo_url(endpoint, repo_name, "GitHub")?;
self.patch_json::<serde_json::Value>(&url, &json!({ "default_branch": branch }))
.map(|_| ())
}
fn github_install_webhook(
&self,
endpoint: &EndpointConfig,
repo: &RemoteRepo,
url: &str,
secret: &str,
) -> Result<()> {
) -> Result<WebhookInstallOutcome> {
let hooks_url = self.repo_hooks_url(endpoint, &repo.name, "GitHub")?;
let body = json!({
"name": "web",
@@ -435,7 +465,11 @@ impl<'a> ProviderClient<'a> {
projects.push(project);
}
}
Ok(projects.into_iter().map(Into::into).collect())
Ok(projects
.into_iter()
.filter(|project| !project.is_deletion_scheduled())
.map(Into::into)
.collect())
}
NamespaceKind::Org | NamespaceKind::Group => {
let encoded = urlencoding(&endpoint.namespace);
@@ -444,7 +478,12 @@ impl<'a> ProviderClient<'a> {
self.site.api_base(),
encoded
);
self.paged_remote_repos::<GitlabProject>(&url)
Ok(self
.paged_get::<GitlabProject>(&url)?
.into_iter()
.filter(|project| !project.is_deletion_scheduled())
.map(Into::into)
.collect())
}
}
}
@@ -496,6 +535,17 @@ impl<'a> ProviderClient<'a> {
self.delete(&url).map(|_| ())
}
fn gitlab_set_default_branch(
&self,
endpoint: &EndpointConfig,
repo_name: &str,
branch: &str,
) -> Result<()> {
let url = self.gitlab_project_url(endpoint, repo_name);
self.put_json::<serde_json::Value>(&url, &json!({ "default_branch": branch }))
.map(|_| ())
}
fn gitlab_group(&self, namespace: &str) -> Result<GitlabGroup> {
let url = format!("{}/groups/{}", self.site.api_base(), urlencoding(namespace));
self.get_json(&url)
@@ -526,7 +576,7 @@ impl<'a> ProviderClient<'a> {
repo: &RemoteRepo,
url: &str,
secret: &str,
) -> Result<()> {
) -> Result<WebhookInstallOutcome> {
let hooks_url = self.gitlab_hooks_url(endpoint, &repo.name);
let body = json!({
"url": url,
@@ -689,13 +739,24 @@ impl<'a> ProviderClient<'a> {
self.delete(&url).map(|_| ())
}
fn gitea_set_default_branch(
&self,
endpoint: &EndpointConfig,
repo_name: &str,
branch: &str,
) -> Result<()> {
let url = self.repo_url(endpoint, repo_name, "Gitea/Forgejo")?;
self.patch_json::<serde_json::Value>(&url, &json!({ "default_branch": branch }))
.map(|_| ())
}
fn gitea_install_webhook(
&self,
endpoint: &EndpointConfig,
repo: &RemoteRepo,
url: &str,
secret: &str,
) -> Result<()> {
) -> Result<WebhookInstallOutcome> {
let hooks_url = self.repo_hooks_url(endpoint, &repo.name, "Gitea/Forgejo")?;
let body = json!({
"type": "gitea",
@@ -875,10 +936,10 @@ impl<'a> ProviderClient<'a> {
target_url: &str,
body: &serde_json::Value,
put_on_update: bool,
) -> Result<()> {
) -> Result<WebhookInstallOutcome> {
let Some(hook) = self.find_existing_hook(hooks_url, target_url)? else {
self.post_json::<serde_json::Value>(hooks_url, body)?;
return Ok(());
return Ok(WebhookInstallOutcome::Created);
};
let update_url = format!("{hooks_url}/{}", hook.id);
@@ -887,7 +948,7 @@ impl<'a> ProviderClient<'a> {
} else {
self.patch_json::<serde_json::Value>(&update_url, body)?;
}
Ok(())
Ok(WebhookInstallOutcome::Existing)
}
fn delete_matching_hook(&self, hooks_url: &str, target_url: &str) -> Result<bool> {
@@ -1157,6 +1218,10 @@ struct GitlabProject {
http_url_to_repo: String,
visibility: String,
description: Option<String>,
marked_for_deletion_at: Option<String>,
marked_for_deletion_on: Option<String>,
#[serde(default)]
pending_delete: bool,
}
impl GitlabProject {
@@ -1184,6 +1249,37 @@ impl GitlabProject {
.eq_ignore_ascii_case(other.project_path()),
}
}
fn is_deletion_scheduled(&self) -> bool {
self.pending_delete
|| self
.marked_for_deletion_at
.as_deref()
.is_some_and(|value| !value.is_empty())
|| self
.marked_for_deletion_on
.as_deref()
.is_some_and(|value| !value.is_empty())
|| is_gitlab_deletion_scheduled_path(&self.name)
|| self
.path
.as_deref()
.is_some_and(is_gitlab_deletion_scheduled_path)
|| self
.path_with_namespace
.as_deref()
.and_then(|path| path.rsplit('/').next())
.is_some_and(is_gitlab_deletion_scheduled_path)
}
}
fn is_gitlab_deletion_scheduled_path(path: &str) -> bool {
let Some((name, project_id)) = path.rsplit_once("-deletion_scheduled-") else {
return false;
};
!name.is_empty()
&& !project_id.is_empty()
&& project_id.bytes().all(|byte| byte.is_ascii_digit())
}
#[derive(Deserialize)]
+737 -25
@@ -3,6 +3,7 @@ use std::fs;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex, mpsc};
use std::thread;
use std::time::{SystemTime, UNIX_EPOCH};
use anyhow::{Context, Result, bail};
use console::style;
@@ -10,11 +11,11 @@ use regex::Regex;
use crate::config::{
Config, ConflictResolutionStrategy, DEFAULT_JOBS, EndpointConfig, MirrorConfig, NamespaceKind,
RepoNameFilter, SyncVisibility, Visibility, default_work_dir, validate_config,
ProviderKind, RepoNameFilter, SyncVisibility, Visibility, default_work_dir, validate_config,
};
use crate::git::{
BranchConflict, BranchDeletion, BranchUpdate, GitMirror, Redactor, RemoteSpec,
is_disabled_repository_error, ls_remote_refs, safe_remote_name,
BranchConflict, BranchDeletion, BranchUpdate, GitMirror, Redactor, RefBackup, RemoteSpec,
is_disabled_repository_error, is_missing_repository_error, ls_remote_refs, safe_remote_name,
};
use crate::logging;
use crate::provider::{
@@ -37,6 +38,7 @@ use self::state::{
};
const CONFLICT_BRANCH_ROOT: &str = "refray/conflicts/";
const DEFAULT_BRANCH: &str = "main";
#[derive(Clone, Debug)]
pub struct SyncOptions {
@@ -139,6 +141,83 @@ pub fn sync_all(config: &Config, options: SyncOptions) -> Result<()> {
Ok(())
}
pub fn sync_webhook_repo(
config: &Config,
group: &str,
repo_name: &str,
work_dir: Option<PathBuf>,
jobs: usize,
) -> Result<()> {
validate_config(config)?;
if jobs == 0 {
bail!("jobs must be at least 1");
}
let work_dir = work_dir.unwrap_or_else(default_work_dir);
fs::create_dir_all(&work_dir)
.with_context(|| format!("failed to create {}", work_dir.display()))?;
let mirror = config
.mirrors
.iter()
.find(|mirror| mirror.name == group)
.with_context(|| format!("no mirror group matched '{group}'"))?;
let repo_filter = mirror.repo_filter()?;
if !repo_filter.matches(repo_name) {
crate::logln!(
" {} {} does not match configured repository filters",
style("skip").yellow().bold(),
style(repo_name).cyan()
);
return Ok(());
}
let tokens = config
.sites
.iter()
.map(|site| site.token())
.collect::<Result<Vec<_>>>()?;
let redactor = Redactor::new(tokens);
let mut ref_state = load_ref_state(&work_dir)?;
crate::logln!();
crate::logln!(
"{} {}",
style("Mirror group").cyan().bold(),
style(&mirror.name).bold()
);
let mut repos = targeted_endpoint_repos(config, mirror, repo_name)?;
let context = RepoSyncContext {
config,
mirror,
work_dir: &work_dir,
redactor,
dry_run: false,
jobs,
};
let outcome = sync_assumed_repo(
&context,
repo_name,
&mut repos,
mirror.create_missing,
&ref_state,
)?;
if !outcome.created_repos.is_empty() {
webhook::ensure_configured_webhooks(
config,
mirror,
&outcome.created_repos,
&work_dir,
jobs,
)?;
}
if let Some(update) = outcome.state_update {
match update {
RepoStateUpdate::Set(refs) => ref_state.set_repo(&mirror.name, repo_name, refs),
RepoStateUpdate::Remove => ref_state.remove_repo(&mirror.name, repo_name),
}
save_ref_state(&work_dir, &ref_state)?;
}
Ok(())
}
struct GroupSyncContext<'a> {
config: &'a Config,
options: &'a SyncOptions,
@@ -412,7 +491,7 @@ fn ensure_missing_repos(
repo_name: &str,
existing: &mut Vec<EndpointRepo>,
create_missing: bool,
) -> Result<()> {
) -> Result<Vec<EndpointRepo>> {
let present = existing
.iter()
.map(|repo| repo.endpoint.clone())
@@ -446,13 +525,19 @@ fn ensure_missing_repos(
style(format!("on {}", endpoint.label())).dim()
);
}
return Ok(());
return Ok(Vec::new());
}
let description = template.and_then(|repo| repo.description);
let expected_private = matches!(create_visibility, Visibility::Private);
let create_jobs = missing.into_iter().enumerate().collect::<Vec<_>>();
let mut created = crate::parallel::map(create_jobs, context.jobs, |(index, endpoint)| {
let created = crate::parallel::map(create_jobs, context.jobs, |(index, endpoint)| {
let site = context.config.site(&endpoint.site).unwrap();
if site.provider == ProviderKind::Gitlab && !is_supported_gitlab_project_path(repo_name) {
log_invalid_gitlab_project_name_skip(repo_name, &endpoint);
return Ok(None);
}
crate::logln!(
" {} {} {}",
style("create").green().bold(),
@@ -460,16 +545,27 @@ fn ensure_missing_repos(
style(format!("on {}", endpoint.label())).dim()
);
let site = context.config.site(&endpoint.site).unwrap();
let client = ProviderClient::new(site)?;
let created = client
.create_repo(
&endpoint,
repo_name,
&create_visibility,
description.as_deref(),
)
.with_context(|| format!("failed to create {} on {}", repo_name, endpoint.label()))?;
let created = match client.create_repo(
&endpoint,
repo_name,
&create_visibility,
description.as_deref(),
) {
Ok(created) => created,
Err(error)
if site.provider == ProviderKind::Gitlab
&& is_gitlab_invalid_project_name_error(&error) =>
{
log_invalid_gitlab_project_name_skip(repo_name, &endpoint);
return Ok(None);
}
Err(error) => {
return Err(error).with_context(|| {
format!("failed to create {} on {}", repo_name, endpoint.label())
});
}
};
if created.private != expected_private {
crate::logln!(
" {} created {} on {}, but provider reported a different visibility than requested",
@@ -478,18 +574,23 @@ fn ensure_missing_repos(
style(endpoint.label()).dim()
);
}
Ok((
Ok(Some((
index,
EndpointRepo {
endpoint,
repo: created,
},
))
)))
})?;
let mut created = created.into_iter().flatten().collect::<Vec<_>>();
created.sort_by_key(|(index, _)| *index);
existing.extend(created.into_iter().map(|(_, repo)| repo));
let created = created
.into_iter()
.map(|(_, repo)| repo)
.collect::<Vec<_>>();
existing.extend(created.clone());
Ok(())
Ok(created)
}
fn visibility_for_created_repo(mirror: &MirrorConfig, template: Option<&RemoteRepo>) -> Visibility {
@@ -504,6 +605,50 @@ fn visibility_for_created_repo(mirror: &MirrorConfig, template: Option<&RemoteRe
.unwrap_or_else(|| mirror.visibility.clone())
}
fn is_supported_gitlab_project_path(name: &str) -> bool {
if name.is_empty()
|| matches!(name.chars().next(), Some('-' | '_' | '.'))
|| matches!(name.chars().last(), Some('-' | '_' | '.'))
{
return false;
}
let lower = name.to_ascii_lowercase();
if lower.ends_with(".git") || lower.ends_with(".atom") {
return false;
}
name.chars()
.all(|ch| ch.is_ascii_alphanumeric() || matches!(ch, '_' | '-' | '.'))
}
fn log_invalid_gitlab_project_name_skip(repo_name: &str, endpoint: &EndpointConfig) {
crate::logln!(
" {} {} {}",
style("skip").yellow().bold(),
style(repo_name).cyan(),
style(format!(
"on {}: invalid GitLab project name/path",
endpoint.label()
))
.dim()
);
}
fn is_gitlab_invalid_project_name_error(error: &anyhow::Error) -> bool {
let text = error
.chain()
.map(ToString::to_string)
.collect::<Vec<_>>()
.join("\n")
.to_ascii_lowercase();
text.contains("400 bad request")
&& (text.contains("project_namespace.path")
|| text.contains("can only include non-accented letters")
|| text.contains("must not start with")
|| text.contains("must start with a letter"))
}
struct RepoSyncContext<'a> {
config: &'a Config,
mirror: &'a MirrorConfig,
@@ -516,6 +661,7 @@ struct RepoSyncContext<'a> {
#[derive(Default)]
struct RepoSyncOutcome {
state_update: Option<RepoStateUpdate>,
created_repos: Vec<EndpointRepo>,
}
enum RepoStateUpdate {
@@ -523,6 +669,53 @@ enum RepoStateUpdate {
Remove,
}
fn mirror_repo_path(context: &RepoSyncContext<'_>, repo_name: &str) -> PathBuf {
context
.work_dir
.join(safe_remote_name(&context.mirror.name))
.join(format!("{}.git", safe_remote_name(repo_name)))
}
fn targeted_endpoint_repos(
config: &Config,
mirror: &MirrorConfig,
repo_name: &str,
) -> Result<Vec<EndpointRepo>> {
mirror
.endpoints
.iter()
.map(|endpoint| {
let site = config.site(&endpoint.site).unwrap();
Ok(EndpointRepo {
endpoint: endpoint.clone(),
repo: RemoteRepo {
name: repo_name.to_string(),
clone_url: endpoint_clone_url(site, endpoint, repo_name)?,
private: matches!(mirror.visibility, Visibility::Private),
description: None,
},
})
})
.collect()
}
fn endpoint_clone_url(
site: &crate::config::SiteConfig,
endpoint: &EndpointConfig,
repo_name: &str,
) -> Result<String> {
let mut url = url::Url::parse(&site.base_url)
.with_context(|| format!("invalid base URL for site '{}'", site.name))?;
let base_path = url.path().trim_end_matches('/');
let repo_path = format!("{}/{}.git", endpoint.namespace.trim_matches('/'), repo_name);
if base_path.is_empty() {
url.set_path(&repo_path);
} else {
url.set_path(&format!("{base_path}/{repo_path}"));
}
Ok(url.to_string())
}
fn sync_repo(
context: &RepoSyncContext<'_>,
repo_name: &str,
@@ -573,14 +766,18 @@ fn sync_repo(
return Ok(RepoSyncOutcome::default());
}
let path = context
.work_dir
.join(safe_remote_name(&context.mirror.name))
.join(format!("{}.git", safe_remote_name(repo_name)));
let path = mirror_repo_path(context, repo_name);
let mirror_repo = GitMirror::open(path, context.redactor.clone(), context.dry_run)?;
mirror_repo.configure_remotes(&initial_remotes)?;
let cached_ref_state = cached_ref_state(&mirror_repo, &initial_remotes)?;
backup_branches_deleted_everywhere(
context,
&mirror_repo,
repo_name,
detailed_repo_ref_state(previous_repo_refs).or(cached_ref_state.as_ref()),
&initial_ref_state,
)?;
for remote in &initial_remotes {
if let Err(error) = mirror_repo.fetch_remote(remote) {
if is_disabled_repository_error(&error) {
@@ -596,7 +793,7 @@ fn sync_repo(
}
}
ensure_missing_repos(context, repo_name, repos, create_missing)?;
let created_repos = ensure_missing_repos(context, repo_name, repos, create_missing)?;
if repos.len() < 2 {
crate::logln!(
@@ -635,6 +832,7 @@ fn sync_repo(
let result = push_repo_refs(
context,
&mirror_repo,
repo_name,
&remotes,
repos,
detailed_repo_ref_state(previous_repo_refs).or(cached_ref_state.as_ref()),
@@ -645,15 +843,193 @@ fn sync_repo(
let Some(refs) = check_remote_refs(context, repo_name, &remotes)? else {
return Ok(RepoSyncOutcome::default());
};
set_default_branch_for_created_repos(context, repo_name, &created_repos, &refs)?;
refs
} else {
initial_ref_state
};
return Ok(RepoSyncOutcome {
state_update: Some(RepoStateUpdate::Set(refs)),
created_repos,
});
}
Ok(RepoSyncOutcome::default())
Ok(RepoSyncOutcome {
created_repos,
..RepoSyncOutcome::default()
})
}
fn set_default_branch_for_created_repos(
context: &RepoSyncContext<'_>,
repo_name: &str,
created_repos: &[EndpointRepo],
refs: &BTreeMap<String, RemoteRefState>,
) -> Result<()> {
if created_repos.is_empty() {
return Ok(());
}
let targets = created_repos
.iter()
.filter(|repo| {
refs.get(&remote_name_for_endpoint_repo(repo))
.is_some_and(|refs| refs.branches.contains_key(DEFAULT_BRANCH))
})
.cloned()
.collect::<Vec<_>>();
crate::parallel::map(targets, context.jobs, |repo| {
crate::logln!(
" {} branch {} {}",
style("default").green().bold(),
style(DEFAULT_BRANCH).cyan(),
style(format!("on {}", repo.endpoint.label())).dim()
);
let site = context.config.site(&repo.endpoint.site).unwrap();
ProviderClient::new(site)?
.set_default_branch(&repo.endpoint, repo_name, DEFAULT_BRANCH)
.with_context(|| {
format!(
"failed to set default branch for {} on {}",
repo_name,
repo.endpoint.label()
)
})?;
Ok(())
})?;
Ok(())
}
fn sync_assumed_repo(
context: &RepoSyncContext<'_>,
repo_name: &str,
repos: &mut [EndpointRepo],
create_missing: bool,
ref_state: &RefState,
) -> Result<RepoSyncOutcome> {
crate::logln!();
crate::logln!(
"{} {}",
style("Repo").magenta().bold(),
style(repo_name).bold()
);
let previous_repo_refs = ref_state.repo(&context.mirror.name, repo_name);
let all_remotes = remote_specs(context, repos)?;
let Some(initial_ref_check) = check_assumed_remote_refs(context, repo_name, &all_remotes)?
else {
return Ok(RepoSyncOutcome::default());
};
if initial_ref_check.refs.is_empty() {
crate::logln!(
" {} {}",
style("skip").yellow().bold(),
style("repository not found on any endpoint").dim()
);
return Ok(RepoSyncOutcome::default());
}
let existing_remote_names = initial_ref_check
.refs
.keys()
.cloned()
.collect::<BTreeSet<_>>();
let mut existing_repos = repos
.iter()
.filter(|repo| existing_remote_names.contains(&remote_name_for_endpoint_repo(repo)))
.cloned()
.collect::<Vec<_>>();
let existing_remotes = all_remotes
.iter()
.filter(|remote| existing_remote_names.contains(&remote.name))
.cloned()
.collect::<Vec<_>>();
let path = mirror_repo_path(context, repo_name);
let mirror_repo = GitMirror::open(path, context.redactor.clone(), context.dry_run)?;
mirror_repo.configure_remotes(&all_remotes)?;
let cached_ref_state = cached_ref_state(&mirror_repo, &existing_remotes)?;
backup_branches_deleted_everywhere(
context,
&mirror_repo,
repo_name,
detailed_repo_ref_state(previous_repo_refs).or(cached_ref_state.as_ref()),
&initial_ref_check.refs,
)?;
for remote in &existing_remotes {
if let Err(error) = mirror_repo.fetch_remote(remote) {
if is_disabled_repository_error(&error) {
crate::logln!(
" {} {} {}",
style("skip").yellow().bold(),
style(repo_name).cyan(),
style(format!("provider blocked access on {}", remote.display)).dim()
);
return Ok(RepoSyncOutcome::default());
}
if is_missing_repository_error(&error) {
crate::logln!(
" {} {} {}",
style("missing").yellow().bold(),
style(repo_name).cyan(),
style(format!("on {}", remote.display)).dim()
);
existing_repos.retain(|repo| remote_name_for_endpoint_repo(repo) != remote.name);
continue;
}
return Err(error).with_context(|| format!("failed to fetch {}", remote.display));
}
}
let created_repos =
ensure_missing_repos(context, repo_name, &mut existing_repos, create_missing)?;
if existing_repos.len() < 2 {
crate::logln!(
" {} {} {}",
style("skip").yellow().bold(),
style(repo_name).cyan(),
style("fewer than two endpoints have this repository").dim()
);
return Ok(RepoSyncOutcome {
created_repos,
..RepoSyncOutcome::default()
});
}
let remotes = remote_specs(context, &existing_repos)?;
mirror_repo.configure_remotes(&remotes)?;
let result = push_repo_refs(
context,
&mirror_repo,
repo_name,
&remotes,
&existing_repos,
detailed_repo_ref_state(previous_repo_refs).or(cached_ref_state.as_ref()),
&initial_ref_check.refs,
)?;
if !context.dry_run && !result.had_conflicts {
let refs = if result.pushed {
let Some(refs) = check_remote_refs(context, repo_name, &remotes)? else {
return Ok(RepoSyncOutcome {
created_repos,
..RepoSyncOutcome::default()
});
};
set_default_branch_for_created_repos(context, repo_name, &created_repos, &refs)?;
refs
} else {
initial_ref_check.refs
};
return Ok(RepoSyncOutcome {
state_update: Some(RepoStateUpdate::Set(refs)),
created_repos,
});
}
Ok(RepoSyncOutcome {
created_repos,
..RepoSyncOutcome::default()
})
}
fn handle_repo_deletion(
@@ -682,8 +1058,10 @@ fn handle_repo_deletion(
style(repo_name).cyan(),
deleted_remotes.join("+")
);
backup_deleted_repo(context, repo_name, repos, previous_refs, current_refs)?;
Ok(Some(RepoSyncOutcome {
state_update: (!context.dry_run).then_some(RepoStateUpdate::Remove),
..RepoSyncOutcome::default()
}))
}
RepoDeletionDecision::Propagate {
@@ -697,9 +1075,11 @@ fn handle_repo_deletion(
deleted_remotes.join("+"),
target_remotes.join("+")
);
backup_deleted_repo(context, repo_name, repos, previous_refs, current_refs)?;
delete_repos(context, repo_name, repos, &target_remotes)?;
Ok(Some(RepoSyncOutcome {
state_update: (!context.dry_run).then_some(RepoStateUpdate::Remove),
..RepoSyncOutcome::default()
}))
}
RepoDeletionDecision::Conflict {
@@ -720,6 +1100,65 @@ fn handle_repo_deletion(
}
}
fn backup_deleted_repo(
context: &RepoSyncContext<'_>,
repo_name: &str,
repos: &[EndpointRepo],
previous_refs: Option<&BTreeMap<String, RemoteRefState>>,
current_refs: &BTreeMap<String, RemoteRefState>,
) -> Result<()> {
if context.dry_run {
crate::logln!(
" {} {} {}",
style("dry-run").yellow().bold(),
style("would create local backup for deleted repo").dim(),
style(repo_name).cyan()
);
return Ok(());
}
let path = mirror_repo_path(context, repo_name);
if repos.is_empty() && !path.exists() {
bail!(
"cannot back up deleted repo {} because local mirror cache {} is missing",
repo_name,
path.display()
);
}
let mirror_repo = GitMirror::open(path, context.redactor.clone(), false)?;
if !repos.is_empty() {
let remotes = remote_specs(context, repos)?;
mirror_repo.configure_remotes(&remotes)?;
for remote in &remotes {
mirror_repo.fetch_remote(remote).with_context(|| {
format!("failed to fetch {} for deletion backup", remote.display)
})?;
}
}
let stamp = backup_stamp()?;
let refs_to_backup = if current_refs.is_empty() {
previous_refs.unwrap_or(current_refs)
} else {
current_refs
};
let backups = repo_ref_backups(repo_name, refs_to_backup, &stamp);
if backups.is_empty() {
crate::logln!(
" {} {} has no refs to bundle before deletion",
style("backup").yellow().bold(),
style(repo_name).cyan()
);
return Ok(());
}
let refs = mirror_repo.backup_refs(&backups)?;
let bundle_path = backup_dir(context, repo_name).join(format!("repo-{stamp}.bundle"));
mirror_repo.create_bundle(&bundle_path, &refs)?;
Ok(())
}
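The fallback above prefers the freshly observed refs and only reaches for the previous state snapshot when the live listing came back empty. In isolation, with a hypothetical plain string map standing in for `RemoteRefState`, the selection reduces to:

```rust
use std::collections::BTreeMap;

// Hypothetical simplification: branch name -> sha.
type Refs = BTreeMap<String, String>;

// Prefer current refs; fall back to the last known state only when the
// live listing is empty (e.g. the repo vanished before we could fetch).
fn refs_to_backup<'a>(previous: Option<&'a Refs>, current: &'a Refs) -> &'a Refs {
    if current.is_empty() {
        previous.unwrap_or(current)
    } else {
        current
    }
}

fn main() {
    let current = Refs::new();
    let mut previous = Refs::new();
    previous.insert("main".to_string(), "abc123".to_string());
    // Live refs are empty, so the previous snapshot is what gets bundled.
    assert_eq!(refs_to_backup(Some(&previous), &current).len(), 1);
}
```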
fn delete_repos(
context: &RepoSyncContext<'_>,
repo_name: &str,
@@ -846,6 +1285,69 @@ fn check_remote_refs(
Ok(Some(refs))
}
struct AssumedRemoteRefState {
refs: BTreeMap<String, RemoteRefState>,
}
fn check_assumed_remote_refs(
context: &RepoSyncContext<'_>,
repo_name: &str,
remotes: &[RemoteSpec],
) -> Result<Option<AssumedRemoteRefState>> {
enum RemoteRefCheck {
Found(String, RemoteRefState),
Missing(String),
Blocked,
}
let ref_jobs = remotes.to_vec();
let results = crate::parallel::map(ref_jobs, context.jobs, |remote| {
crate::logln!(
" {} {}",
style("probe refs").cyan().bold(),
style(&remote.display).dim()
);
match ls_remote_refs(&remote, &context.redactor) {
Ok(snapshot) => Ok(RemoteRefCheck::Found(remote.name, snapshot.into())),
Err(error) if is_missing_repository_error(&error) => {
crate::logln!(
" {} {} {}",
style("missing").yellow().bold(),
style(repo_name).cyan(),
style(format!("on {}", remote.display)).dim()
);
Ok(RemoteRefCheck::Missing(remote.name))
}
Err(error) if is_disabled_repository_error(&error) => {
crate::logln!(
" {} {} {}",
style("skip").yellow().bold(),
style(repo_name).cyan(),
style(format!("provider blocked access on {}", remote.display)).dim()
);
Ok(RemoteRefCheck::Blocked)
}
Err(error) => {
Err(error).with_context(|| format!("failed to check refs for {}", remote.display))
}
}
})?;
let mut refs = BTreeMap::new();
for result in results {
match result {
RemoteRefCheck::Found(remote, refs_for_remote) => {
refs.insert(remote, refs_for_remote);
}
RemoteRefCheck::Missing(_) => {}
RemoteRefCheck::Blocked => return Ok(None),
}
}
Ok(Some(AssumedRemoteRefState { refs }))
}
fn remote_specs(context: &RepoSyncContext<'_>, repos: &[EndpointRepo]) -> Result<Vec<RemoteSpec>> {
let endpoint_map = context
.mirror
@@ -874,6 +1376,7 @@ fn remote_specs(context: &RepoSyncContext<'_>, repos: &[EndpointRepo]) -> Result
fn push_repo_refs(
context: &RepoSyncContext<'_>,
mirror_repo: &GitMirror,
repo_name: &str,
remotes: &[RemoteSpec],
repos: &[EndpointRepo],
previous_refs: Option<&BTreeMap<String, RemoteRefState>>,
@@ -972,6 +1475,13 @@ fn push_repo_refs(
{
if !branch_deletions.is_empty() {
print_branch_deletions(&branch_deletions);
backup_deleted_branches(
context,
mirror_repo,
repo_name,
&branch_deletions,
current_refs,
)?;
mirror_repo.delete_branches(remotes, &branch_deletions)?;
}
if !cleanup_branches.is_empty() {
@@ -994,6 +1504,13 @@ fn push_repo_refs(
}
if !branch_deletions.is_empty() {
print_branch_deletions(&branch_deletions);
backup_deleted_branches(
context,
mirror_repo,
repo_name,
&branch_deletions,
current_refs,
)?;
mirror_repo.delete_branches(remotes, &branch_deletions)?;
}
if !branches_to_push.is_empty() {
@@ -1031,6 +1548,64 @@ fn push_repo_refs(
})
}
fn backup_deleted_branches(
context: &RepoSyncContext<'_>,
mirror_repo: &GitMirror,
repo_name: &str,
deletions: &[BranchDeletion],
current_refs: &BTreeMap<String, RemoteRefState>,
) -> Result<()> {
if context.dry_run {
crate::logln!(
" {} {} deleted branch backup{}",
style("dry-run").yellow().bold(),
style("would create").dim(),
if deletions.len() == 1 { "" } else { "s" }
);
return Ok(());
}
let stamp = backup_stamp()?;
let backups = branch_ref_backups(deletions, current_refs, &stamp);
if backups.is_empty() {
bail!("cannot back up branch deletion because no target branch refs were available");
}
let refs = mirror_repo.backup_refs(&backups)?;
let bundle_path = backup_dir(context, repo_name).join(format!("branches-{stamp}.bundle"));
mirror_repo.create_bundle(&bundle_path, &refs)?;
Ok(())
}
fn backup_branches_deleted_everywhere(
context: &RepoSyncContext<'_>,
mirror_repo: &GitMirror,
repo_name: &str,
previous_refs: Option<&BTreeMap<String, RemoteRefState>>,
current_refs: &BTreeMap<String, RemoteRefState>,
) -> Result<()> {
let Some(previous_refs) = previous_refs else {
return Ok(());
};
let stamp = backup_stamp()?;
let backups = branches_deleted_everywhere_backups(previous_refs, current_refs, &stamp);
if backups.is_empty() {
return Ok(());
}
if context.dry_run {
crate::logln!(
" {} {} branch backup{} for refs deleted everywhere",
style("dry-run").yellow().bold(),
style("would create").dim(),
if backups.len() == 1 { "" } else { "s" }
);
return Ok(());
}
let refs = mirror_repo.backup_refs(&backups)?;
let bundle_path = backup_dir(context, repo_name).join(format!("branches-{stamp}.bundle"));
mirror_repo.create_bundle(&bundle_path, &refs)?;
Ok(())
}
enum BranchConflictResolution {
Rebased(Vec<BranchUpdate>),
PullRequest(BranchConflict),
@@ -1342,6 +1917,140 @@ fn conflict_pr_base_branch(branch: &str) -> Option<String> {
decode_hex_component(encoded)
}
fn backup_dir(context: &RepoSyncContext<'_>, repo_name: &str) -> PathBuf {
context
.work_dir
.join("backups")
.join(safe_remote_name(&context.mirror.name))
.join(safe_remote_name(repo_name))
}
fn backup_stamp() -> Result<String> {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.with_context(|| "system clock is before UNIX_EPOCH")?;
Ok(format!("{}-{:09}", now.as_secs(), now.subsec_nanos()))
}
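One property worth noting about `backup_stamp`: the nanosecond component is zero-padded to nine digits, so stamps with the same number of seconds digits sort chronologically as plain strings, which keeps bundle filenames in order. A standalone sketch using only std (`backup_stamp_sketch` is a hypothetical stand-in, not the function above):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical stand-in mirroring backup_stamp: unix seconds plus a
// zero-padded 9-digit nanosecond suffix, e.g. "1715342400-012345678".
fn backup_stamp_sketch() -> String {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before UNIX_EPOCH");
    format!("{}-{:09}", now.as_secs(), now.subsec_nanos())
}

fn main() {
    println!("{}", backup_stamp_sketch());
}
```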
fn branch_ref_backups(
deletions: &[BranchDeletion],
current_refs: &BTreeMap<String, RemoteRefState>,
stamp: &str,
) -> Vec<RefBackup> {
let mut backups = Vec::new();
let mut seen = BTreeSet::new();
for deletion in deletions {
for remote in &deletion.target_remotes {
let Some(sha) = current_refs
.get(remote)
.and_then(|refs| refs.branches.get(&deletion.branch))
else {
continue;
};
if !seen.insert((deletion.branch.clone(), sha.clone())) {
continue;
}
backups.push(RefBackup {
refname: format!(
"refs/refray-backups/branches/{}/{}/{}",
hex_component(&deletion.branch),
stamp,
hex_component(remote)
),
sha: sha.clone(),
description: format!(
"branch {} from {} before propagated deletion",
deletion.branch, remote
),
});
}
}
backups
}
fn branches_deleted_everywhere_backups(
previous_refs: &BTreeMap<String, RemoteRefState>,
current_refs: &BTreeMap<String, RemoteRefState>,
stamp: &str,
) -> Vec<RefBackup> {
let mut branches = BTreeSet::new();
for refs in previous_refs.values() {
branches.extend(
refs.branches
.keys()
.filter(|branch| !is_internal_conflict_branch(branch))
.cloned(),
);
}
let mut backups = Vec::new();
for branch in branches {
if current_refs
.values()
.any(|refs| refs.branches.contains_key(&branch))
{
continue;
}
let mut seen_shas = BTreeSet::new();
for (remote, refs) in previous_refs {
let Some(sha) = refs.branches.get(&branch) else {
continue;
};
if !seen_shas.insert(sha.clone()) {
continue;
}
backups.push(RefBackup {
refname: format!(
"refs/refray-backups/branches/{}/{}/deleted-everywhere-{}",
hex_component(&branch),
stamp,
hex_component(remote)
),
sha: sha.clone(),
description: format!(
"branch {branch} from {remote} before all endpoints pruned it"
),
});
}
}
backups
}
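The selection logic above is essentially a set difference: branches known from the previous state that no endpoint still advertises. A simplified sketch with hypothetical plain string maps in place of the real `RemoteRefState` (and without the conflict-branch and dedup handling):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical simplification: each remote maps branch name -> sha.
type Refs = BTreeMap<String, String>;

// Branches present on some remote in the previous state but on no
// remote now; these are the ones worth bundling before they are lost.
fn deleted_everywhere(
    previous: &BTreeMap<String, Refs>,
    current: &BTreeMap<String, Refs>,
) -> BTreeSet<String> {
    let mut gone: BTreeSet<String> = previous
        .values()
        .flat_map(|refs| refs.keys().cloned())
        .collect();
    gone.retain(|branch| !current.values().any(|refs| refs.contains_key(branch)));
    gone
}

fn main() {
    let previous = BTreeMap::from([(
        "origin".to_string(),
        Refs::from([("old".to_string(), "abc".to_string())]),
    )]);
    let current = BTreeMap::new();
    assert!(deleted_everywhere(&previous, &current).contains("old"));
}
```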
fn repo_ref_backups(
repo_name: &str,
refs_by_remote: &BTreeMap<String, RemoteRefState>,
stamp: &str,
) -> Vec<RefBackup> {
let mut backups = Vec::new();
for (remote, refs) in refs_by_remote {
for (branch, sha) in &refs.branches {
backups.push(RefBackup {
refname: format!(
"refs/refray-backups/repos/{}/{}/heads/{}",
stamp,
hex_component(remote),
hex_component(branch)
),
sha: sha.clone(),
description: format!("repo {repo_name} branch {branch} from {remote}"),
});
}
for (tag, sha) in &refs.tags {
backups.push(RefBackup {
refname: format!(
"refs/refray-backups/repos/{}/{}/tags/{}",
stamp,
hex_component(remote),
hex_component(tag)
),
sha: sha.clone(),
description: format!("repo {repo_name} tag {tag} from {remote}"),
});
}
}
backups
}
fn hex_component(value: &str) -> String {
const HEX: &[u8; 16] = b"0123456789abcdef";
let mut output = String::with_capacity(value.len() * 2);
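The diff truncates `hex_component` here, but its intent — lowercase hex-encode a name so branch and remote names with `/` or other special characters become safe, collision-free ref path segments — can be sketched as follows (a hypothetical reimplementation built from the visible prefix, not the code hidden by the hunk):

```rust
// Hypothetical sketch: every input byte becomes two lowercase hex
// digits, so the output contains only [0-9a-f] and is always a valid
// component of a refs/refray-backups/... ref name.
fn hex_component_sketch(value: &str) -> String {
    const HEX: &[u8; 16] = b"0123456789abcdef";
    let mut output = String::with_capacity(value.len() * 2);
    for byte in value.bytes() {
        output.push(HEX[(byte >> 4) as usize] as char);
        output.push(HEX[(byte & 0x0f) as usize] as char);
    }
    output
}

fn main() {
    // "feature/x" encodes to hex digits only; no '/' survives.
    println!("{}", hex_component_sketch("feature/x"));
}
```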
@@ -1504,6 +2213,9 @@ fn repo_deletion_decision(
previous_refs: Option<&BTreeMap<String, RemoteRefState>>,
current_refs: &BTreeMap<String, RemoteRefState>,
) -> RepoDeletionDecision {
if !mirror.delete_missing {
return RepoDeletionDecision::None;
}
let Some(previous_refs) = previous_refs else {
return RepoDeletionDecision::None;
};
@@ -8,7 +8,6 @@ use std::time::Duration;
use anyhow::{Context, Result, bail};
use console::style;
use hmac::{Hmac, KeyInit, Mac};
use regex::escape;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sha2::Sha256;
@@ -18,9 +17,11 @@ use crate::config::{
Config, EndpointConfig, MirrorConfig, ProviderKind, RepoNameFilter, default_work_dir,
validate_config,
};
use crate::provider::{EndpointRepo, ProviderClient, RemoteRepo, list_mirror_repos};
use crate::provider::{
EndpointRepo, ProviderClient, RemoteRepo, WebhookInstallOutcome, list_mirror_repos,
};
use crate::state::{load_toml_or_default, save_toml};
use crate::sync::{SyncOptions, sync_all};
use crate::sync::{SyncOptions, sync_all, sync_webhook_repo};
type HmacSha256 = Hmac<Sha256>;
const WEBHOOK_STATE_FILE: &str = "webhook-state.toml";
@@ -151,6 +152,7 @@ fn full_sync_timer_loop(
&config,
SyncOptions {
work_dir: work_dir.clone(),
jobs: config.jobs,
..SyncOptions::default()
},
) {
@@ -375,15 +377,12 @@ fn worker_loop(
let _sync_guard = sync_lock
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
let result = sync_all(
let result = sync_webhook_repo(
&config,
SyncOptions {
group: Some(job.group.clone()),
repo_pattern: Some(format!("^{}$", escape(&job.repo))),
work_dir: work_dir.clone(),
jobs: 1,
..SyncOptions::default()
},
&job.group,
&job.repo,
work_dir.clone(),
config.jobs,
);
match result {
Ok(()) => crate::logln!(
@@ -575,66 +574,73 @@ fn run_uninstall_tasks(tasks: Vec<WebhookUninstallTask>, jobs: usize) -> Result<
fn install_webhook_task(task: WebhookInstallTask, state: &Arc<Mutex<WebhookState>>) -> Result<()> {
let key = webhook_installation_key(&task.group, &task.endpoint, &task.repo.name);
crate::logln!(
" {} {} {}",
style(if task.dry_run {
"would install"
} else {
"install"
})
.green()
.bold(),
style(&task.repo.name).cyan(),
style(format!("webhook on {}", task.endpoint.label())).dim()
);
if task.dry_run {
crate::logln!(
" {} {} {}",
style("would install").green().bold(),
style(&task.repo.name).cyan(),
style(format!("webhook on {}", task.endpoint.label())).dim()
);
return Ok(());
}
let client = ProviderClient::new(&task.site)?;
if let Err(error) = client.install_webhook(&task.endpoint, &task.repo, &task.url, &task.secret)
{
if is_duplicate_webhook_error(&error) {
match client.install_webhook(&task.endpoint, &task.repo, &task.url, &task.secret) {
Ok(outcome) => {
let action = match outcome {
WebhookInstallOutcome::Created => "install",
WebhookInstallOutcome::Existing => "exists",
};
crate::logln!(
" {} {} {}",
style("exists").green().bold(),
style(action).green().bold(),
style(&task.repo.name).cyan(),
style(format!("webhook on {}", task.endpoint.label())).dim()
);
record_webhook_installation(state, key, task);
return Ok(());
Ok(())
}
if let Some(reason) = non_actionable_webhook_failure_reason(&error) {
crate::logln!(
" {} {} {}",
style("skip").yellow().bold(),
style(&task.repo.name).cyan(),
style(format!("webhook on {}: {reason}", task.endpoint.label())).dim()
);
let mut state = state
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
state.skipped.insert(
key,
SkippedWebhookInstallation {
group: task.group,
endpoint: task.endpoint,
repo: task.repo.name,
url: task.url,
reason,
},
);
return Ok(());
Err(error) => {
if is_duplicate_webhook_error(&error) {
crate::logln!(
" {} {} {}",
style("exists").green().bold(),
style(&task.repo.name).cyan(),
style(format!("webhook on {}", task.endpoint.label())).dim()
);
record_webhook_installation(state, key, task);
return Ok(());
}
if let Some(reason) = non_actionable_webhook_failure_reason(&error) {
crate::logln!(
" {} {} {}",
style("skip").yellow().bold(),
style(&task.repo.name).cyan(),
style(format!("webhook on {}: {reason}", task.endpoint.label())).dim()
);
let mut state = state
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
state.skipped.insert(
key,
SkippedWebhookInstallation {
group: task.group,
endpoint: task.endpoint,
repo: task.repo.name,
url: task.url,
reason,
},
);
return Ok(());
}
Err(error).with_context(|| {
format!(
"failed to install webhook for {} on {}",
task.repo.name,
task.endpoint.label()
)
})
}
return Err(error).with_context(|| {
format!(
"failed to install webhook for {} on {}",
task.repo.name,
task.endpoint.label()
)
});
}
record_webhook_installation(state, key, task);
Ok(())
}
fn record_webhook_installation(
@@ -341,6 +341,7 @@ name = "all"
sync_visibility = "all"
repo_whitelist = '{}'
create_missing = {}
delete_missing = true
visibility = "public"
conflict_resolution = "{}"
@@ -379,9 +380,10 @@ namespace = "{}"
git(&work, ["tag", "v1.0.0"])?;
let remote_url = source.authenticated_repo_url(&repo)?;
self.git(&work, ["remote", "add", "origin", &remote_url])?;
self.git(
self.git_retry(
&work,
["push", "origin", "HEAD:main", "feature/github", "v1.0.0"],
"initial seed push",
)?;
source.wait_branch(
&repo,
@@ -392,6 +394,7 @@ namespace = "{}"
source.wait_repo_listed(&repo)?;
self.sync_repo(&repo, [])?;
self.assert_branch_all_equal_after_optional_resync(&repo, MAIN_BRANCH)?;
self.assert_default_branch_all_except(&repo, MAIN_BRANCH, &source.site_name)?;
self.assert_branch_all_equal(&repo, "feature/github")?;
self.assert_tag_all_equal(&repo, "v1.0.0")?;
@@ -615,6 +618,7 @@ namespace = "{}"
source.wait_branch_absent(&repo, "delete-me")?;
self.sync_repo(&repo, [])?;
self.assert_branch_absent_everywhere(&repo, "delete-me")?;
self.assert_backup_bundle_contains(&repo, "refs/refray-backups/branches/")?;
Ok(())
}
@@ -629,6 +633,7 @@ namespace = "{}"
source.wait_repo_absent(&repo)?;
self.sync_repo(&repo, [])?;
self.assert_repo_absent_everywhere(&repo)?;
self.assert_backup_bundle_contains(&repo, "refs/refray-backups/repos/")?;
Ok(())
}
@@ -696,7 +701,7 @@ namespace = "{}"
)?;
let remote_url = provider.authenticated_repo_url(repo)?;
self.git(&work, ["remote", "add", "origin", &remote_url])?;
self.git(&work, ["push", "origin", "HEAD:main"])?;
self.git_retry(&work, ["push", "origin", "HEAD:main"], "seed push")?;
provider.wait_branch(
repo,
MAIN_BRANCH,
@@ -723,7 +728,11 @@ namespace = "{}"
for provider in &self.settings.providers {
let remote_url = provider.authenticated_repo_url(repo)?;
self.git(&work, ["remote", "add", &provider.site_name, &remote_url])?;
self.git(&work, ["push", &provider.site_name, "HEAD:main"])?;
self.git_retry(
&work,
["push", &provider.site_name, "HEAD:main"],
"seed-all push",
)?;
provider.wait_branch(repo, MAIN_BRANCH, &sha)?;
provider.wait_repo_listed(repo)?;
provider.unprotect_branch(repo, MAIN_BRANCH)?;
@@ -784,6 +793,10 @@ namespace = "{}"
assert_output_success(output, "git", &self.redactor)
}
fn git_retry<const N: usize>(&self, path: &Path, args: [&str; N], label: &str) -> Result<()> {
retry(label, || self.git(path, args))
}
fn set_repo_whitelist(&self, pattern: &str) -> Result<()> {
let contents = fs::read_to_string(&self.config_path)
.with_context(|| format!("failed to read {}", self.config_path.display()))?;
@@ -1061,6 +1074,30 @@ namespace = "{}"
}
}
fn assert_default_branch_all_except(
&self,
repo: &str,
branch: &str,
excluded_site: &str,
) -> Result<()> {
retry("default branch metadata", || {
for provider in &self.settings.providers {
if provider.site_name == excluded_site {
continue;
}
let actual = provider.default_branch(repo)?;
if actual.as_deref() != Some(branch) {
bail!(
"expected default branch {branch} on {} for {repo}, got {:?}",
provider.site_name,
actual
);
}
}
Ok(())
})
}
fn assert_tag_all_equal(&self, repo: &str, tag: &str) -> Result<()> {
retry("tag convergence", || {
let refs = self.refs_by_provider(repo)?;
@@ -1090,6 +1127,29 @@ namespace = "{}"
})
}
fn assert_backup_bundle_contains(&self, repo: &str, marker: &str) -> Result<()> {
let bundles = self.backup_bundles_for_repo(repo)?;
for bundle in &bundles {
let output = Command::new("git")
.args(["bundle", "list-heads", bundle.to_str().unwrap()])
.output()
.context("failed to run git bundle list-heads")?;
if output.status.success() && String::from_utf8_lossy(&output.stdout).contains(marker) {
return Ok(());
}
}
bail!(
"no local backup bundle for {repo} contained {marker}; checked {:?}",
bundles
)
}
fn backup_bundles_for_repo(&self, repo: &str) -> Result<Vec<PathBuf>> {
let mut bundles = Vec::new();
collect_backup_bundles(&self.cache_home, repo, &mut bundles)?;
Ok(bundles)
}
fn assert_conflict_branch_exists(&self, repo: &str) -> Result<()> {
retry("conflict branch", || {
for refs in self.refs_by_provider(repo)?.values() {
@@ -1361,6 +1421,17 @@ impl ProviderAccount {
}
}
fn default_branch(&self, repo: &str) -> Result<Option<String>> {
let value = self
.get_json::<Value>(&self.repo_api_url(repo))
.with_context(|| format!("failed to inspect {} default branch", self.site_name))?;
Ok(value
.get("default_branch")
.and_then(Value::as_str)
.filter(|branch| !branch.is_empty())
.map(ToOwned::to_owned))
}
fn wait_repo_present(&self, repo: &str) -> Result<()> {
retry("repo present", || {
if self.repo_exists(repo)? {
@@ -1951,6 +2022,26 @@ fn assert_output_success(output: Output, label: &str, redactor: &Redactor) -> Re
)
}
fn collect_backup_bundles(dir: &Path, repo: &str, output: &mut Vec<PathBuf>) -> Result<()> {
if !dir.exists() {
return Ok(());
}
for entry in fs::read_dir(dir).with_context(|| format!("failed to read {}", dir.display()))? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
collect_backup_bundles(&path, repo, output)?;
continue;
}
if path.extension().and_then(|value| value.to_str()) == Some("bundle")
&& path.to_string_lossy().contains(repo)
{
output.push(path);
}
}
Ok(())
}
fn retry(label: &str, mut action: impl FnMut() -> Result<()>) -> Result<()> {
let mut last_error = None;
for _ in 0..30 {
@@ -25,6 +25,7 @@ fn parses_value_tokens() {
repo_whitelist = "^important-|-mirror$"
repo_blacklist = "-archive$"
create_missing = true
delete_missing = false
visibility = "private"
conflict_resolution = "auto_rebase_pull_request"
@@ -57,6 +58,7 @@ fn parses_value_tokens() {
config.mirrors[0].repo_blacklist,
Some("-archive$".to_string())
);
assert!(!config.mirrors[0].delete_missing);
let webhook = config.webhook.unwrap();
assert!(webhook.install);
assert_eq!(webhook.url, "https://mirror.example.test/webhook");
@@ -92,6 +94,30 @@ fn config_defaults_jobs() {
assert_eq!(config.jobs, DEFAULT_JOBS);
}
#[test]
fn mirror_defaults_to_deleting_missing_repos_for_existing_configs() {
let config: Config = toml::from_str(
r#"
[[mirrors]]
name = "personal"
create_missing = true
[[mirrors.endpoints]]
site = "github"
kind = "user"
namespace = "alice"
[[mirrors.endpoints]]
site = "gitea"
kind = "user"
namespace = "alice"
"#,
)
.unwrap();
assert!(config.mirrors[0].delete_missing);
}
#[test]
fn validation_rejects_unknown_sites_and_single_endpoint_groups() {
let config = Config {
@@ -108,6 +134,7 @@ fn validation_rejects_unknown_sites_and_single_endpoint_groups() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -137,6 +164,7 @@ fn validation_rejects_unknown_sites_and_single_endpoint_groups() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -241,6 +269,7 @@ fn validation_rejects_duplicate_mirror_endpoints() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -287,6 +316,7 @@ fn mirror_config() -> MirrorConfig {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}
@@ -41,6 +41,19 @@ fn detects_provider_disabled_repository_errors() {
assert!(!is_disabled_repository_error(&generic_forbidden));
}
#[test]
fn detects_missing_repository_errors() {
let error: anyhow::Error = GitCommandError::new(
"git ls-remote",
"",
"remote: Repository not found.\nfatal: repository 'https://github.com/alice/missing.git/' not found",
)
.into();
assert!(is_missing_repository_error(&error));
assert!(!is_disabled_repository_error(&error));
}
#[test]
fn ls_remote_snapshot_changes_when_remote_refs_change() {
let fixture = GitFixture::new();
@@ -275,6 +288,44 @@ fn delete_branches_removes_branch_from_target_remotes() {
assert!(!fixture.remote_ref_exists(&fixture.remote_b, "refs/heads/main"));
}
#[test]
fn backup_refs_create_restorable_bundle_before_branch_delete() {
let fixture = GitFixture::new();
let expected = fixture.commit("base", "base", 1_700_000_000);
fixture.push_head(&fixture.remote_a, "main");
fixture.push_head(&fixture.remote_b, "main");
let mirror = fixture.mirror();
fixture.fetch_all(&mirror);
let backup_ref = "refs/refray-backups/branches/main/test/a".to_string();
mirror
.backup_refs(&[RefBackup {
refname: backup_ref.clone(),
sha: expected.clone(),
description: "branch main before delete".to_string(),
}])
.unwrap();
let bundle = fixture._temp.path().join("branch-backup.bundle");
mirror
.create_bundle(&bundle, std::slice::from_ref(&backup_ref))
.unwrap();
mirror
.delete_branches(
&fixture.remotes(),
&[BranchDeletion {
branch: "main".to_string(),
deleted_remotes: vec!["a".to_string()],
target_remotes: vec!["b".to_string()],
}],
)
.unwrap();
let heads = git_output(None, ["bundle", "list-heads", bundle.to_str().unwrap()]);
assert!(heads.contains(&expected));
assert!(heads.contains(&backup_ref));
}
#[test]
fn tag_decisions_mirror_matching_or_missing_tags_and_skip_divergent_tags() {
let fixture = GitFixture::new();
@@ -14,6 +14,8 @@ fn wizard_builds_sync_group_from_profile_urls() {
"",
"",
"",
"",
"",
"n",
"4",
]
@@ -46,6 +48,7 @@ fn wizard_builds_sync_group_from_profile_urls() {
assert_eq!(config.mirrors[0].endpoints[1].namespace, "azalea");
assert_eq!(config.mirrors[0].sync_visibility, SyncVisibility::All);
assert!(config.mirrors[0].create_missing);
assert!(config.mirrors[0].delete_missing);
assert_eq!(config.mirrors[0].visibility, Visibility::Private);
assert_eq!(
config.mirrors[0].conflict_resolution,
@@ -54,6 +57,9 @@ fn wizard_builds_sync_group_from_profile_urls() {
let output = String::from_utf8(output).unwrap();
assert!(output.contains("1. github.com/hykilpikonna <-> gitea.example.test/azalea"));
assert!(output.contains("Deletion backups: refray keeps a local backup"));
assert!(output.contains("Create repositories that are missing from an endpoint?"));
assert!(output.contains("delete it everywhere?"));
assert!(output.contains("Add another sync group"));
assert!(output.contains("Edit an existing group"));
assert!(output.contains("Delete an existing group"));
@@ -77,6 +83,8 @@ fn wizard_can_build_three_way_sync() {
"",
"",
"",
"",
"",
"n",
"4",
]
@@ -92,6 +100,35 @@ fn wizard_can_build_three_way_sync() {
assert_eq!(config.sites.len(), 3);
}
#[test]
fn wizard_can_disable_missing_repo_creation_and_repo_delete_propagation() {
let input = [
"https://github.com/alice",
"gh-token",
"",
"https://gitea.example.test/alice",
"gt-token",
"",
"n",
"",
"",
"n",
"n",
"",
"n",
"4",
]
.join("\n")
+ "\n";
let mut reader = Cursor::new(input.as_bytes());
let mut output = Vec::new();
let config = run_config_wizard_with_io(Config::default(), &mut reader, &mut output).unwrap();
assert!(!config.mirrors[0].create_missing);
assert!(!config.mirrors[0].delete_missing);
}
#[test]
fn wizard_can_enable_webhooks() {
let input = [
@@ -105,6 +142,8 @@ fn wizard_can_enable_webhooks() {
"",
"",
"",
"",
"",
"y",
"https://mirror.example.test/webhook",
"y",
@@ -159,6 +198,8 @@ fn wizard_reuses_existing_credentials_for_same_instance() {
"",
"",
"",
"",
"",
"n",
"4",
]
@@ -214,6 +255,7 @@ fn wizard_starts_existing_config_at_sync_group_menu() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -243,6 +285,7 @@ fn wizard_can_ask_to_run_full_sync_after_config() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -319,6 +362,7 @@ fn wizard_edits_existing_sync_group_from_menu() {
repo_whitelist: Some("^important-".to_string()),
repo_blacklist: Some("-archive$".to_string()),
create_missing: false,
delete_missing: true,
visibility: Visibility::Public,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -337,6 +381,8 @@ fn wizard_edits_existing_sync_group_from_menu() {
"^public-",
"-skip$",
"",
"",
"",
"n",
"4",
]
@@ -356,6 +402,7 @@ fn wizard_edits_existing_sync_group_from_menu() {
assert_eq!(mirror.endpoints[1].site, "gitlab");
assert_eq!(mirror.endpoints[1].namespace, "bob");
assert!(!mirror.create_missing);
assert!(mirror.delete_missing);
assert_eq!(mirror.sync_visibility, SyncVisibility::Public);
assert_eq!(mirror.repo_whitelist, Some("^public-".to_string()));
assert_eq!(mirror.repo_blacklist, Some("-skip$".to_string()));
@@ -406,12 +453,13 @@ fn wizard_prefills_existing_sync_group_when_editing() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
webhook: None,
};
let input = ["2", "1", "", "", "", "", "n", "", "", "", "n", "4"].join("\n") + "\n";
let input = ["2", "1", "", "", "", "", "n", "", "", "", "", "", "n", "4"].join("\n") + "\n";
let mut reader = Cursor::new(input.as_bytes());
let mut output = Vec::new();
@@ -470,6 +518,7 @@ fn wizard_deletes_existing_sync_group_from_menu() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -529,6 +578,7 @@ fn wizard_can_go_back_from_delete_menu() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -65,6 +65,9 @@ where
let endpoints = prompt_sync_group_endpoints(reader, writer, config, &[])?;
let sync_visibility = prompt_sync_visibility(reader, writer, None)?;
let repo_filters = prompt_repo_filters(reader, writer, None)?;
write_deletion_backup_notice(writer)?;
let create_missing = prompt_create_missing(reader, writer, None)?;
let delete_missing = prompt_delete_missing(reader, writer, None)?;
let conflict_resolution = prompt_conflict_resolution(reader, writer, None)?;
config.upsert_mirror(MirrorConfig {
name: next_mirror_name(config),
@@ -72,7 +75,8 @@ where
sync_visibility,
repo_whitelist: repo_filters.whitelist,
repo_blacklist: repo_filters.blacklist,
create_missing: true,
create_missing,
delete_missing,
visibility: Visibility::Private,
conflict_resolution,
});
@@ -276,6 +280,8 @@ where
whitelist: config.mirrors[index - 1].repo_whitelist.clone(),
blacklist: config.mirrors[index - 1].repo_blacklist.clone(),
};
let existing_create_missing = config.mirrors[index - 1].create_missing;
let existing_delete_missing = config.mirrors[index - 1].delete_missing;
let existing_conflict_resolution =
config.mirrors[index - 1].conflict_resolution.clone();
let endpoints = prompt_sync_group_endpoints(reader, writer, config, &existing)?;
@@ -283,6 +289,11 @@ where
prompt_sync_visibility(reader, writer, Some(&existing_sync_visibility))?;
let repo_filters =
prompt_repo_filters(reader, writer, Some(&existing_repo_filters))?;
write_deletion_backup_notice(writer)?;
let create_missing =
prompt_create_missing(reader, writer, Some(existing_create_missing))?;
let delete_missing =
prompt_delete_missing(reader, writer, Some(existing_delete_missing))?;
let conflict_resolution = prompt_conflict_resolution(
reader,
writer,
@@ -292,6 +303,8 @@ where
config.mirrors[index - 1].sync_visibility = sync_visibility;
config.mirrors[index - 1].repo_whitelist = repo_filters.whitelist;
config.mirrors[index - 1].repo_blacklist = repo_filters.blacklist;
config.mirrors[index - 1].create_missing = create_missing;
config.mirrors[index - 1].delete_missing = delete_missing;
config.mirrors[index - 1].conflict_resolution = conflict_resolution;
prompt_webhook_setup(reader, writer, config)?;
writeln!(writer, "updated sync group {index}")?;
@@ -572,6 +585,51 @@ where
Ok(parse_repo_pattern(&value))
}
fn write_deletion_backup_notice<W>(writer: &mut W) -> Result<()>
where
W: Write,
{
writeln!(
writer,
"Deletion backups: refray keeps a local backup before propagating repository or branch deletes."
)?;
Ok(())
}
fn prompt_create_missing<R, W>(
reader: &mut R,
writer: &mut W,
existing: Option<bool>,
) -> Result<bool>
where
R: BufRead,
W: Write,
{
prompt_bool(
reader,
writer,
"Create repositories that are missing from an endpoint?",
existing.unwrap_or(true),
)
}
fn prompt_delete_missing<R, W>(
reader: &mut R,
writer: &mut W,
existing: Option<bool>,
) -> Result<bool>
where
R: BufRead,
W: Write,
{
prompt_bool(
reader,
writer,
"When a previously synced repository is deleted from one endpoint, delete it everywhere?",
existing.unwrap_or(true),
)
}
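Both prompts above delegate to a `prompt_bool` helper that this hunk does not show. Purely as an illustrative sketch (the real refray helper may differ; `std::io::Result` stands in here for the crate's `Result` alias), a generic yes/no prompt over `BufRead`/`Write` could look like:

```rust
use std::io::{BufRead, Write};

// Hypothetical sketch of a yes/no prompt with a default answer.
// An empty line keeps the default; "y"/"yes" answers true; anything else false.
fn prompt_bool<R: BufRead, W: Write>(
    reader: &mut R,
    writer: &mut W,
    question: &str,
    default: bool,
) -> std::io::Result<bool> {
    let hint = if default { "Y/n" } else { "y/N" };
    writeln!(writer, "{question} [{hint}]")?;
    let mut line = String::new();
    reader.read_line(&mut line)?;
    Ok(match line.trim().to_ascii_lowercase().as_str() {
        "" => default,
        "y" | "yes" => true,
        _ => false,
    })
}
```

Threading `reader`/`writer` through as generics, as the diff does, keeps these prompts testable against in-memory buffers instead of a live terminal.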
fn sync_visibility_value(sync_visibility: &SyncVisibility) -> &'static str {
match sync_visibility {
SyncVisibility::All => "all",
+208 -1
@@ -225,6 +225,42 @@ fn list_gitlab_user_repos_merges_authenticated_owned_projects() {
handle.join().unwrap();
}
#[test]
fn list_gitlab_group_repos_ignores_deletion_scheduled_projects() {
let projects = r#"[
{"name":"active","path":"active","path_with_namespace":"maigolabs/active","http_url_to_repo":"https://gitlab.example.test/maigolabs/active.git","visibility":"private","description":null,"namespace":{"path":"maigolabs","full_path":"maigolabs"}},
{"name":"Kairos-deletion_scheduled-82068172","path":"Kairos-deletion_scheduled-82068172","path_with_namespace":"maigolabs/Kairos-deletion_scheduled-82068172","http_url_to_repo":"https://gitlab.example.test/maigolabs/Kairos-deletion_scheduled-82068172.git","visibility":"private","description":null,"namespace":{"path":"maigolabs","full_path":"maigolabs"}},
{"name":"marked-at","path":"marked-at","path_with_namespace":"maigolabs/marked-at","http_url_to_repo":"https://gitlab.example.test/maigolabs/marked-at.git","visibility":"private","description":null,"namespace":{"path":"maigolabs","full_path":"maigolabs"},"marked_for_deletion_at":"2026-05-17"},
{"name":"marked-on","path":"marked-on","path_with_namespace":"maigolabs/marked-on","http_url_to_repo":"https://gitlab.example.test/maigolabs/marked-on.git","visibility":"private","description":null,"namespace":{"path":"maigolabs","full_path":"maigolabs"},"marked_for_deletion_on":"2026-05-17"},
{"name":"pending","path":"pending","path_with_namespace":"maigolabs/pending","http_url_to_repo":"https://gitlab.example.test/maigolabs/pending.git","visibility":"private","description":null,"namespace":{"path":"maigolabs","full_path":"maigolabs"},"pending_delete":true}
]"#;
let (api_url, handle) = one_request_server("200 OK", projects, |request| {
assert!(
request.starts_with(
"GET /groups/maigolabs/projects?simple=true&include_subgroups=false&per_page=100 "
),
"request was {request}"
);
});
let site = SiteConfig {
api_url: Some(api_url),
..site(ProviderKind::Gitlab, None)
};
let repos = ProviderClient::new(&site)
.unwrap()
.list_repos(&EndpointConfig {
site: "gitlab".to_string(),
kind: NamespaceKind::Group,
namespace: "maigolabs".to_string(),
})
.unwrap();
assert_eq!(repos.len(), 1);
assert_eq!(repos[0].name, "active");
handle.join().unwrap();
}
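The JSON fixture above exercises three ways GitLab flags a project as scheduled for deletion: a `marked_for_deletion_at` or `marked_for_deletion_on` date, a `pending_delete` flag, and the `-deletion_scheduled-<id>` rename GitLab applies on delayed deletion. A filter consistent with those cases might be sketched as follows (struct and field names are assumptions drawn from the fixture, not the actual refray types):

```rust
// Hypothetical shape mirroring only the fields the fixture above exercises.
struct GitlabProject {
    name: String,
    marked_for_deletion_at: Option<String>,
    marked_for_deletion_on: Option<String>,
    pending_delete: Option<bool>,
}

// A project is skipped when any deletion marker is present, or when its
// name carries the rename GitLab applies to delayed-deletion projects.
fn is_deletion_scheduled(project: &GitlabProject) -> bool {
    project.marked_for_deletion_at.is_some()
        || project.marked_for_deletion_on.is_some()
        || project.pending_delete.unwrap_or(false)
        || project.name.contains("-deletion_scheduled-")
}
```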
#[test]
fn create_gitlab_repo_returns_existing_repo_when_path_is_taken() {
let existing = r#"{"name":"repo","path":"repo","path_with_namespace":"alice/repo","http_url_to_repo":"https://gitlab.example.test/alice/repo.git","visibility":"public","description":"existing","namespace":{"path":"alice","full_path":"alice"}}"#;
@@ -282,6 +318,82 @@ fn create_gitlab_repo_returns_existing_repo_when_path_is_taken() {
handle.join().unwrap();
}
#[test]
fn set_github_default_branch_patches_repo() {
let (api_url, handle) = one_request_server("200 OK", "{}", |request| {
assert!(
request.starts_with("PATCH /repos/alice/repo "),
"request was {request}"
);
assert!(
request.contains(r#""default_branch":"main""#),
"request was {request}"
);
assert!(
request
.to_ascii_lowercase()
.contains("authorization: bearer secret"),
"request was {request}"
);
});
let site = SiteConfig {
api_url: Some(api_url),
..site(ProviderKind::Github, None)
};
ProviderClient::new(&site)
.unwrap()
.set_default_branch(
&EndpointConfig {
site: "github".to_string(),
kind: NamespaceKind::User,
namespace: "alice".to_string(),
},
"repo",
"main",
)
.unwrap();
handle.join().unwrap();
}
#[test]
fn set_gitlab_default_branch_updates_project() {
let (api_url, handle) = one_request_server("200 OK", "{}", |request| {
assert!(
request.starts_with("PUT /projects/alice%2Frepo "),
"request was {request}"
);
assert!(
request.contains(r#""default_branch":"main""#),
"request was {request}"
);
assert!(
request
.to_ascii_lowercase()
.contains("private-token: secret"),
"request was {request}"
);
});
let site = SiteConfig {
api_url: Some(api_url),
..site(ProviderKind::Gitlab, None)
};
ProviderClient::new(&site)
.unwrap()
.set_default_branch(
&EndpointConfig {
site: "gitlab".to_string(),
kind: NamespaceKind::User,
namespace: "alice".to_string(),
},
"repo",
"main",
)
.unwrap();
handle.join().unwrap();
}
#[test]
fn install_webhook_posts_github_hook_when_missing() {
let (api_url, handle) = request_server(
@@ -309,7 +421,7 @@ fn install_webhook_posts_github_hook_when_missing() {
};
let client = ProviderClient::new(&site).unwrap();
-    client
+    let outcome = client
.install_webhook(
&EndpointConfig {
site: "github".to_string(),
@@ -326,6 +438,63 @@ fn install_webhook_posts_github_hook_when_missing() {
"secret",
)
.unwrap();
assert_eq!(outcome, WebhookInstallOutcome::Created);
handle.join().unwrap();
}
#[test]
fn install_webhook_reports_existing_forgejo_hook() {
let (api_url, handle) = request_server(
vec![
(
"200 OK",
r#"[{"id":42,"config":{"url":"https://mirror.example.test/webhook/"}}]"#,
),
("200 OK", r#"{"id":42}"#),
],
|index, request| match index {
0 => assert!(
request.starts_with("GET /repos/alice/repo/hooks "),
"request was {request}"
),
1 => {
assert!(
request.starts_with("PATCH /repos/alice/repo/hooks/42 "),
"request was {request}"
);
assert!(request.contains("https://mirror.example.test/webhook"));
assert!(request.contains("secret"));
assert!(request.contains("push"));
}
_ => unreachable!(),
},
);
let site = SiteConfig {
api_url: Some(api_url),
..site(ProviderKind::Forgejo, None)
};
let client = ProviderClient::new(&site).unwrap();
let outcome = client
.install_webhook(
&EndpointConfig {
site: "forgejo".to_string(),
kind: NamespaceKind::User,
namespace: "alice".to_string(),
},
&RemoteRepo {
name: "repo".to_string(),
clone_url: "https://codeberg.org/alice/repo.git".to_string(),
private: true,
description: None,
},
"https://mirror.example.test/webhook",
"secret",
)
.unwrap();
assert_eq!(outcome, WebhookInstallOutcome::Existing);
handle.join().unwrap();
}
@@ -653,6 +822,44 @@ fn create_gitea_repo_returns_existing_repo_on_conflict() {
handle.join().unwrap();
}
#[test]
fn set_gitea_default_branch_patches_repo() {
let (api_url, handle) = one_request_server("200 OK", "{}", |request| {
assert!(
request.starts_with("PATCH /repos/alice/repo "),
"request was {request}"
);
assert!(
request.contains(r#""default_branch":"main""#),
"request was {request}"
);
assert!(
request
.to_ascii_lowercase()
.contains("authorization: token secret"),
"request was {request}"
);
});
let site = SiteConfig {
api_url: Some(api_url),
..site(ProviderKind::Gitea, None)
};
ProviderClient::new(&site)
.unwrap()
.set_default_branch(
&EndpointConfig {
site: "gitea".to_string(),
kind: NamespaceKind::User,
namespace: "alice".to_string(),
},
"repo",
"main",
)
.unwrap();
handle.join().unwrap();
}
#[test]
fn open_pull_request_posts_github_pull_when_missing() {
let (api_url, handle) = request_server(
+126
@@ -174,6 +174,29 @@ fn branch_deletion_decisions_ignore_internal_conflict_branches() {
assert!(blocked.is_empty());
}
#[test]
fn branches_deleted_everywhere_are_backed_up_before_prune() {
let mut previous = BTreeMap::new();
previous.insert(
"github".to_string(),
remote_ref_state("a", &[("main", "111")]),
);
previous.insert(
"gitea".to_string(),
remote_ref_state("b", &[("main", "111")]),
);
let backups = branches_deleted_everywhere_backups(&previous, &BTreeMap::new(), "stamp");
assert_eq!(backups.len(), 1);
assert_eq!(backups[0].sha, "111");
assert!(
backups[0]
.refname
.starts_with("refs/refray-backups/branches/")
);
}
#[test]
fn repo_deletion_decision_propagates_previous_synced_repo_deletion() {
let mirror = test_mirror();
@@ -208,6 +231,35 @@ fn repo_deletion_decision_propagates_previous_synced_repo_deletion() {
);
}
#[test]
fn repo_deletion_decision_is_disabled_by_mirror_policy() {
let mut mirror = test_mirror();
mirror.delete_missing = false;
let mut previous = BTreeMap::new();
previous.insert(
remote_key("github"),
remote_ref_state("a", &[("main", "111")]),
);
previous.insert(
remote_key("gitea"),
remote_ref_state("b", &[("main", "111")]),
);
let mut current = BTreeMap::new();
current.insert(
remote_key("gitea"),
remote_ref_state("b", &[("main", "111")]),
);
let decision = repo_deletion_decision(
&mirror,
&[endpoint_repo("gitea")],
Some(&previous),
&current,
);
assert_eq!(decision, RepoDeletionDecision::None);
}
#[test]
fn repo_deletion_decision_conflicts_when_remaining_repo_changed() {
let mirror = test_mirror();
@@ -396,6 +448,46 @@ fn endpoint_remote_names_do_not_slug_collide() {
);
}
#[test]
fn targeted_endpoint_repos_synthesize_clone_urls_without_listing() {
let mirror = MirrorConfig {
name: "sync-1".to_string(),
endpoints: vec![EndpointConfig {
site: "gitlab".to_string(),
kind: crate::config::NamespaceKind::Group,
namespace: "parent/child".to_string(),
}],
sync_visibility: crate::config::SyncVisibility::All,
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: crate::config::Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
};
let config = Config {
jobs: crate::config::DEFAULT_JOBS,
sites: vec![crate::config::SiteConfig {
name: "gitlab".to_string(),
provider: crate::config::ProviderKind::Gitlab,
base_url: "https://gitlab.example.test/root".to_string(),
api_url: None,
token: crate::config::TokenConfig::Value("token".to_string()),
git_username: None,
}],
mirrors: vec![mirror.clone()],
webhook: None,
};
let repos = targeted_endpoint_repos(&config, &mirror, "repo").unwrap();
assert_eq!(repos.len(), 1);
assert_eq!(
repos[0].repo.clone_url,
"https://gitlab.example.test/root/parent/child/repo.git"
);
}
#[test]
fn created_repo_visibility_follows_existing_public_repo() {
let mirror = test_mirror();
@@ -440,6 +532,39 @@ fn created_repo_visibility_falls_back_to_config_without_template() {
);
}
#[test]
fn gitlab_invalid_project_name_errors_are_skippable() {
let error = anyhow::Error::msg(
r#"POST https://gitlab.com/api/v4/projects returned 400 Bad Request: {"message":{"project_namespace.path":["can only include non-accented letters, digits, '_', '-' and '.'. It must not start with '-', '_', or '.'."],"name":["can contain only letters, digits, emoji, '_', '.', '+', dashes, or spaces. It must start with a letter, digit, emoji, or '_'."]}}"#,
);
assert!(is_gitlab_invalid_project_name_error(&error));
}
#[test]
fn gitlab_project_path_validation_matches_create_constraints() {
for name in ["Kairos", "needLe", "amaoke.app", "repo_1", "repo-1"] {
assert!(is_supported_gitlab_project_path(name), "{name}");
}
for name in [
"",
".github",
"_private",
"-draft",
"repo.",
"repo_",
"repo-",
"repo.git",
"repo.atom",
"has space",
"has+plus",
"荞麦main",
] {
assert!(!is_supported_gitlab_project_path(name), "{name}");
}
}
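A validator consistent with the accepted and rejected names in this test could be sketched as below. This is only an illustration of the constraints the cases encode, not the actual `is_supported_gitlab_project_path` implementation: ASCII letters, digits, `_`, `-`, `.` only; must start with a letter or digit; must not end with `.`, `_`, or `-`; and the reserved `.git`/`.atom` suffixes are rejected.

```rust
// Illustrative sketch of a GitLab project-path check matching the
// test cases above; the real implementation may differ in detail.
fn is_supported_gitlab_project_path(name: &str) -> bool {
    let bytes = name.as_bytes();
    // Non-empty, ASCII letters/digits/'_'/'-'/'.' only ("荞麦main" fails here).
    if bytes.is_empty()
        || !bytes
            .iter()
            .all(|&b| b.is_ascii_alphanumeric() || matches!(b, b'_' | b'-' | b'.'))
    {
        return false;
    }
    // Must start with a letter or digit (".github", "_private", "-draft" fail).
    if !bytes[0].is_ascii_alphanumeric() {
        return false;
    }
    // Must not end with '.', '_', or '-' ("repo.", "repo_", "repo-" fail).
    if matches!(bytes[bytes.len() - 1], b'.' | b'_' | b'-') {
        return false;
    }
    // Reserved suffixes GitLab refuses ("repo.git", "repo.atom" fail).
    let lower = name.to_ascii_lowercase();
    !(lower.ends_with(".git") || lower.ends_with(".atom"))
}
```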
fn remote_ref_state(hash: &str, branches: &[(&str, &str)]) -> RemoteRefState {
RemoteRefState {
hash: hash.to_string(),
@@ -475,6 +600,7 @@ fn test_mirror() -> MirrorConfig {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: crate::config::Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}
+4
@@ -115,6 +115,7 @@ fn matches_jobs_by_provider_and_namespace() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -142,6 +143,7 @@ fn matching_jobs_respects_repo_name_filters() {
repo_whitelist: Some("^important-".to_string()),
repo_blacklist: Some("-archive$".to_string()),
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
};
@@ -360,6 +362,7 @@ fn uninstall_webhooks_skips_blocked_provider_access() {
repo_whitelist: None,
repo_blacklist: None,
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}],
@@ -713,6 +716,7 @@ fn filtered_mirror() -> MirrorConfig {
repo_whitelist: Some("^important-".to_string()),
repo_blacklist: Some("-archive$".to_string()),
create_missing: true,
delete_missing: true,
visibility: Visibility::Private,
conflict_resolution: ConflictResolutionStrategy::Fail,
}