Commit
(feat) update to 0.9.3b0 (#944)
w-gc authored Dec 19, 2024
1 parent 41482e5 commit a1411c5
Showing 346 changed files with 8,145 additions and 3,125 deletions.
9 changes: 5 additions & 4 deletions .bazelrc
@@ -43,7 +43,7 @@ build --copt=-fstack-protector-strong
build:linux --copt=-Wl,-z,noexecstack
build:macos --copt=-Wa,--noexecstack

- test --keep_going
+ build --keep_going
test --test_output=errors

build:benchmark --copt -O3
@@ -55,11 +55,12 @@ build:linux --action_env=BAZEL_LINKLIBS=-l%:libstdc++.a:-l%:libgcc.a

# platform specific config
# Bazel will automatic pick platform config since we have enable_platform_specific_config set
- build:macos --copt="-Xpreprocessor -fopenmp"
+ build:macos --copt=-Xclang=-fopenmp
build:macos --copt=-Wno-unused-command-line-argument
build:macos --features=-supports_dynamic_linker
- build:macos --macos_minimum_os=12.0
- build:macos --host_macos_minimum_os=12.0
+ build:macos --macos_minimum_os=13.0
+ build:macos --host_macos_minimum_os=13.0
+ build:macos --action_env MACOSX_DEPLOYMENT_TARGET=13.0

build:linux --copt=-fopenmp
build:linux --linkopt=-fopenmp
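
Since `enable_platform_specific_config` is set, Bazel applies the `build:macos` (or `build:linux`) flags automatically, so the updated OpenMP and deployment-target settings reach any local build. A minimal sketch, assuming a working Bazel and Xcode toolchain; the `//libspu/...` target pattern is the one used in the CI configs in this commit and may differ for other targets:

```sh
# Build on macOS; the build:macos flags above (including -Xclang=-fopenmp
# and the 13.0 deployment target) are picked up automatically.
bazel build //libspu/... -c opt
```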
4 changes: 2 additions & 2 deletions .circleci/asan-config.yml
@@ -19,7 +19,7 @@ version: 2.1
parameters:
run-asan:
type: boolean
- default: false
+ default: true

# Define a job to be invoked later in a workflow.
# See: https://circleci.com/docs/2.0/configuration-reference/#jobs
@@ -55,7 +55,7 @@ jobs:
command: |
set +e
declare -i test_status
- bazel test //libspu/... --features=asan --ui_event_filters=-info,-debug,-warning --test_output=errors | tee test_result.log; test_status=${PIPESTATUS[0]}
+ bazel test //libspu/... --features=asan --test_timeout=500 --ui_event_filters=-info,-debug,-warning --test_output=errors | tee test_result.log; test_status=${PIPESTATUS[0]}
sh ../devtools/rename-junit-xml.sh
find bazel-testlogs/ -type f -name "test.log" -print0 | xargs -0 tar -cvzf test_logs.tar.gz
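The updated ASAN job can be approximated locally with the same flags. A sketch, assuming a checkout that builds the `//libspu/...` targets:

```sh
# Mirror the CI command: run the libspu tests under AddressSanitizer with the
# new 500-second per-test timeout, keeping a copy of the output for inspection.
bazel test //libspu/... \
  --features=asan \
  --test_timeout=500 \
  --test_output=errors | tee test_result.log
```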
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -17,7 +17,7 @@ version: 2.1
setup: true

orbs:
- path-filtering: circleci/path-filtering@1.0.0
+ path-filtering: circleci/path-filtering@1.1.0
continuation: circleci/[email protected]

parameters:
4 changes: 2 additions & 2 deletions .circleci/continue-config.yml
@@ -64,7 +64,7 @@ commands:
../devtools/bazel_cache_setup.py --in_file=../gcs.data --out_file=../gcs.json --min_download
- run:
name: "build"
- command: bazel build <<parameters.targets>> -c opt --ui_event_filters=-info,-debug,-warning --jobs 20
+ command: bazel build <<parameters.targets>> -c opt --ui_event_filters=-info,-debug,-warning
- run:
name: "test"
command: |
@@ -120,7 +120,7 @@ jobs:
extra_bazel_test_args: --test_env LD_LIBRARY_PATH=/root/miniconda3/lib/
macOS_ut:
macos:
- xcode: 15.4.0
+ xcode: 16.0.0
resource_class: macos.m1.large.gen1
steps:
- checkout
2 changes: 1 addition & 1 deletion .circleci/release-config.yml
@@ -63,7 +63,7 @@ commands:
jobs:
macOS_publish:
macos:
- xcode: 15.4.0
+ xcode: 16.0.0
resource_class: macos.m1.large.gen1
parameters:
python_ver:
3 changes: 2 additions & 1 deletion .clang-tidy
@@ -28,7 +28,8 @@ Checks: "abseil-cleanup-ctad,
-readability-identifier-length,
-readability-function-cognitive-complexity,
-readability-magic-numbers,
-readability-named-parameter"
-readability-named-parameter,
-readability-convert-member-functions-to-static"

CheckOptions:
- key: bugprone-argument-comment.StrictMode
6 changes: 3 additions & 3 deletions .github/workflows/scorecard.yml
@@ -32,12 +32,12 @@ jobs:

steps:
- name: "Checkout code"
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
+ uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false

- name: "Run analysis"
- uses: ossf/scorecard-action@dc50aa9510b46c811795eb24b2f1ba02a914e534 # v2.3.3
+ uses: ossf/scorecard-action@62b2cac7ed8198b15735ed49ab1e5cf35480ba46 # v2.4.0
with:
results_file: results.sarif
results_format: sarif
@@ -67,6 +67,6 @@ jobs:

# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
- uses: github/codeql-action/upload-sarif@4fa2a7953630fd2f3fb380f21be14ede0169dd4f # v3.25.12
+ uses: github/codeql-action/upload-sarif@df409f7d9260372bd5f19e5b04e83cb3c43714ae # v3.27.9
with:
sarif_file: results.sarif
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,12 @@
>
> please add your unreleased change here.
## 20241219

- [SPU] 0.9.3b0 release
- [Improvement] Optimize exponential computation for semi2k (**experimental**)
- [Feature] Add more send/recv actions profiling

## 20240716

- [SPU] 0.9.2b0 release
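Once published, the release can be installed as usual. A sketch, assuming the wheel is available on PyPI under the `spu` package name:

```sh
# Pin the exact pre-release version from this commit.
pip install spu==0.9.3b0
```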
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -72,7 +72,7 @@ python3 -m pip install -r requirements-dev.txt
#### macOS

```sh
- # macOS >= 12.0, Xcode >= 14.0
+ # macOS >= 13.0, Xcode >= 15.0

# Install Xcode
https://apps.apple.com/us/app/xcode/id497799835?mt=12
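Before building on macOS, the raised minimums can be checked with standard tools. A quick sketch:

```sh
# Confirm the host meets the new requirements (macOS >= 13.0, Xcode >= 15.0).
sw_vers -productVersion   # prints the macOS version, e.g. 13.6
xcodebuild -version       # prints the Xcode version, e.g. Xcode 15.4
```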
24 changes: 23 additions & 1 deletion README.md
@@ -52,7 +52,9 @@ Please follow [Installation Guidelines](INSTALLATION.md) to install SPU.

## Citing SPU

- If you think SPU is helpful for your research or development, please consider citing our [paper](https://www.usenix.org/conference/atc23/presentation/ma):
+ If you think SPU is helpful for your research or development, please consider citing our papers:

+ [USENIX ATC'23](https://www.usenix.org/conference/atc23/presentation/ma)

```text
@inproceedings {spu,
@@ -69,6 +71,26 @@ If you think SPU is helpful for your research or development, please consider ci…
}
```

[ICML'24](https://proceedings.mlr.press/v235/wu24d.html)

```text
@inproceedings{ditto,
title = {Ditto: Quantization-aware Secure Inference of Transformers upon {MPC}},
author = {Wu, Haoqi and Fang, Wenjing and Zheng, Yancheng and Ma, Junming and Tan, Jin and Wang, Lei},
booktitle = {Proceedings of the 41st International Conference on Machine Learning},
pages = {53346--53365},
year = {2024},
editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
volume = {235},
series = {Proceedings of Machine Learning Research},
month = {21--27 Jul},
publisher = {PMLR},
pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wu24d/wu24d.pdf},
url = {https://proceedings.mlr.press/v235/wu24d.html},
abstract = {Due to the rising privacy concerns on sensitive client data and trained models like Transformers, secure multi-party computation (MPC) techniques are employed to enable secure inference despite attendant overhead. Existing works attempt to reduce the overhead using more MPC-friendly non-linear function approximations. However, the integration of quantization widely used in plaintext inference into the MPC domain remains unclear. To bridge this gap, we propose the framework named Ditto to enable more efficient quantization-aware secure Transformer inference. Concretely, we first incorporate an MPC-friendly quantization into Transformer inference and employ a quantization-aware distillation procedure to maintain the model utility. Then, we propose novel MPC primitives to support the type conversions that are essential in quantization and implement the quantization-aware MPC execution of secure quantized inference. This approach significantly decreases both computation and communication overhead, leading to improvements in overall efficiency. We conduct extensive experiments on Bert and GPT2 models to evaluate the performance of Ditto. The results demonstrate that Ditto is about $3.14\sim 4.40\times$ faster than MPCFormer (ICLR 2023) and $1.44\sim 2.35\times$ faster than the state-of-the-art work PUMA with negligible utility degradation.}
}
```

## Acknowledgement

We thank the significant contributions made by [Alibaba Gemini Lab](https://alibaba-gemini-lab.github.io) and security advisories made by [VUL337@NISL@THU](https://netsec.ccert.edu.cn/vul337).
4 changes: 4 additions & 0 deletions WORKSPACE
@@ -53,6 +53,10 @@ rules_foreign_cc_dependencies(
register_preinstalled_tools = True,
)

load("@bazel_features//:deps.bzl", "bazel_features_deps")

bazel_features_deps()

load("@rules_cuda//cuda:repositories.bzl", "register_detected_cuda_toolchains", "rules_cuda_dependencies")

rules_cuda_dependencies()
2 changes: 2 additions & 0 deletions bazel/eigen.BUILD
@@ -20,6 +20,7 @@
# matrices, and related algorithms.

load("@rules_cc//cc:defs.bzl", "cc_library")
load("@yacl//bazel:yacl.bzl", "OMP_DEPS")

licenses([
# Note: Eigen is an MPL2 library that includes GPL v3 and LGPL v2.1+ code.
@@ -64,6 +65,7 @@ cc_library(
],
includes = ["."],
visibility = ["//visibility:public"],
deps = OMP_DEPS,
)

filegroup(