
docs: split CUDA build paths by platform

master
Jan Svabenik committed 2 days ago
parent commit 0f80148dd0
5 changed files with 95 additions and 3 deletions
  1. build-cuda-linux.sh (+18, -0)
  2. build-cuda-windows.ps1 (+20, -0)
  3. build-sdrplay.ps1 (+9, -2)
  4. docs/build-cuda.md (+46, -0)
  5. internal/demod/gpudemod/README.md (+2, -1)

build-cuda-linux.sh (+18, -0)

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail

CUDA_ROOT="${CUDA_ROOT:-/usr/local/cuda}"
SRC="internal/demod/gpudemod/kernels.cu"
OUT_DIR="internal/demod/gpudemod/build"
OUT_OBJ="$OUT_DIR/kernels.o"

mkdir -p "$OUT_DIR"

if [[ ! -x "$CUDA_ROOT/bin/nvcc" ]]; then
    echo "nvcc not found at $CUDA_ROOT/bin/nvcc" >&2
    exit 1
fi

echo "Building CUDA kernel artifacts for Linux..."
"$CUDA_ROOT/bin/nvcc" -c "$SRC" -o "$OUT_OBJ" -I "$CUDA_ROOT/include"
echo "Built: $OUT_OBJ"

build-cuda-windows.ps1 (+20, -0)

@@ -0,0 +1,20 @@
$ErrorActionPreference = 'Stop'

$msvcCl = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64'
if (-not (Test-Path (Join-Path $msvcCl 'cl.exe'))) {
    throw "cl.exe not found at $msvcCl"
}

$cudaBin = 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.2\bin'
if (-not (Test-Path (Join-Path $cudaBin 'nvcc.exe'))) {
    throw "nvcc.exe not found at $cudaBin"
}

$env:PATH = "$msvcCl;$cudaBin;" + $env:PATH

Write-Host "Building CUDA kernel artifacts for Windows..." -ForegroundColor Cyan
powershell -ExecutionPolicy Bypass -File tools\build-gpudemod-kernel.ps1
if ($LASTEXITCODE -ne 0) { throw "kernel build failed" }

Write-Host "Done. Kernel artifacts prepared." -ForegroundColor Green
Write-Host "Note: final full-app linking may still require an MSVC-compatible CGO/link strategy, not the current MinGW flow." -ForegroundColor Yellow

build-sdrplay.ps1 (+9, -2)

@@ -3,7 +3,11 @@ $gcc = 'C:\msys64\mingw64\bin'
if (-not (Test-Path (Join-Path $gcc 'gcc.exe'))) {
    throw "gcc not found at $gcc"
}
-$env:PATH = "$gcc;" + $env:PATH
+$msvcCl = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64'
+if (-not (Test-Path (Join-Path $msvcCl 'cl.exe'))) {
+    throw "cl.exe not found at $msvcCl"
+}
+$env:PATH = "$gcc;$msvcCl;" + $env:PATH
$env:CGO_ENABLED = '1'

# SDRplay
@@ -38,13 +42,16 @@ if (Test-Path $cudaMingw) {
}

Write-Host "Building with SDRplay + cuFFT support..." -ForegroundColor Cyan
+Write-Host "WARNING: this path still performs final Go linking through MinGW GCC." -ForegroundColor Yellow
+Write-Host "If CUDA kernel artifacts are MSVC-built, final link may fail due to mixed toolchains." -ForegroundColor Yellow

$gccHost = Join-Path $gcc 'g++.exe'
if (!(Test-Path $gccHost)) {
    throw "g++.exe not found at $gccHost"
}

-powershell -ExecutionPolicy Bypass -File tools\build-gpudemod-kernel.ps1 -HostCompiler $gccHost
+# Kernel build currently relies on nvcc + MSVC host compiler availability.
+powershell -ExecutionPolicy Bypass -File tools\build-gpudemod-kernel.ps1
if ($LASTEXITCODE -ne 0) { throw "kernel build failed" }

go build -tags "sdrplay,cufft" ./cmd/sdrd


docs/build-cuda.md (+46, -0)

@@ -0,0 +1,46 @@
# CUDA Build Strategy

## Problem statement

The repository currently mixes two Windows toolchain worlds:

- Go/CGO final link often goes through MinGW GCC/LD
- CUDA kernel compilation via `nvcc` on Windows prefers MSVC (`cl.exe`)

This works for isolated package tests, but full application builds can fail when an MSVC-built CUDA library is linked by MinGW, producing unresolved symbols such as:

- `__GSHandlerCheck`
- `__security_cookie`
- `_Init_thread_epoch`

## Recommended split

### Windows

Use an explicitly Windows-oriented build path:

1. Prepare CUDA kernel artifacts with `nvcc`
2. Keep the resulting CUDA linkage path clearly separated from MinGW-based fallback builds
3. Do not assume that a MinGW-linked Go binary can always consume MSVC-built CUDA archives

### Linux

Prefer a GCC/NVCC-oriented build path:

1. Build CUDA kernels with `nvcc` + GCC
2. Link through the normal Linux CGO flow
3. Avoid Windows-specific import-lib and MSVC runtime assumptions entirely

## Repository design guidance

- Keep `internal/demod/gpudemod/` platform-neutral at the Go API level
- Keep CUDA kernels in `kernels.cu`
- Use OS-specific build scripts for orchestration
- Avoid embedding Windows-only build assumptions into shared Go code when possible

## Current practical status

- `go test ./...` passes
- `go test -tags cufft ./internal/demod/gpudemod` passes with NVCC/MSVC setup
- `build-sdrplay.ps1` has progressed past the original invalid `#cgo LDFLAGS` issue
- Remaining Windows blocker is now a toolchain mismatch between MSVC-built CUDA artifacts and MinGW final linking

internal/demod/gpudemod/README.md (+2, -1)

@@ -8,6 +8,7 @@ Phase 1 CUDA demod scaffolding.
- `cufft` builds allocate GPU buffers and cross the CGO/CUDA launch boundary.
- If CUDA launch wrappers are not backed by compiled kernels yet, the code falls back to CPU DSP.
- The shifted IQ path is already wired so a successful GPU freq-shift result can be copied back and reused immediately.
+- Build orchestration should now be considered OS-specific; see `docs/build-cuda.md`.

## First real kernel

@@ -22,7 +23,7 @@ On a CUDA-capable dev machine with toolchain installed:

1. Compile `kernels.cu` into an object file and archive it into a linkable library
   - helper script: `tools/build-gpudemod-kernel.ps1`
-2. For MinGW/CGO builds, prefer building the archive with MinGW host compiler + `ar.exe`
+2. On Jan's Windows machine, the working kernel-build path currently relies on `nvcc` + MSVC `cl.exe` in PATH
3. Link `gpudemod_kernels.lib` into the `cufft` build
4. Replace `gpud_launch_freq_shift(...)` stub body with the real kernel launch
5. Validate copied-back shifted IQ against `dsp.FreqShift`

