ICPC 2020 - N. What’s Our Vector, Victor?
We are given n known vectors p_1, …, p_n ∈ R^d and the Euclidean distances from an unknown vector x to each of them. We must output any vector x consistent with all those distances. The hidden test data is generated from an actual vector, so a solution always exists.
Source-first archive entry
This page is built from the copied files in competitive_programming/icpc/2020/N-whats-our-vector-victor. Edit
competitive_programming/icpc/2020/N-whats-our-vector-victor/solution.tex to update the written solution and
competitive_programming/icpc/2020/N-whats-our-vector-victor/solution.cpp to update the implementation.
The website does not replace those files with hand-maintained HTML. It reads the copied source tree during the build and exposes the exact files below.
Problem Statement
Copied statement text kept beside the solution archive for direct reference.
Problem N
What’s Our Vector, Victor?
Time limit: 6 seconds
Vector embeddings are a common tool in machine learning systems. Complex real-world concepts (for instance, words in a dictionary) are mapped onto vectors of real numbers. If embeddings are trained properly, related concepts remain close together in the vector space, so questions like “are these two words synonyms?” can be reduced to straightforward mathematical tests.
You have been hired by Ye Olde Ice Cream Shoppe to create an embedding model for flavors, so that when they are out of an ice cream flavor they can recommend related flavors to customers. After training your embedding on six cloud datacenters for four months, you finally had the perfect flavor model! You were ready to revolutionize the ice cream industry on your street and, dare we say it, your neighborhood! Well, until you accidentally dripped some free ice cream on your keyboard and deleted half the data. The Shoppe cannot afford the 30 billion rubles needed to retrain the model, so you are in trouble.
Image from pxfuel
Fortunately, you still have various training results lying around. For a given deleted vector, the training data tells you how close it was to some known flavor vectors. The closeness of vectors A and B is just the standard Euclidean distance metric (that is, the length of the vector A − B). Write a tool that reconstructs embeddings which are consistent with the training results.
Input
The first line contains two integers d and n, where d (1 ≤ d ≤ 500) is the number of dimensions of the vectors, and n (1 ≤ n ≤ 500) is the number of training results for a deleted embedding vector you want to reconstruct. Each of the next n lines describes a training result using d + 1 numbers x1, . . . , xd and e. In a training result, x1, . . . , xd (−100 ≤ xi ≤ 100) are the coordinates of a known vector, and e (0 ≤ e ≤ 5 000) is the Euclidean distance from that vector to the deleted one.
Your submission will be tested only with the sample inputs given below and inputs generated according to the following procedure. First, d and n are chosen. Then, n input vectors and 1 output vector, each with dimension d, are chosen at random. The d · (n + 1) coordinates are independent and uniformly distributed in the interval [−100, 100]. Next, the Euclidean distance from each input vector to the output vector is computed. Finally, the output vector is discarded. Calculations use double-precision floating-point numbers. Numbers obtained using this procedure appear in the input with 17 digits after the decimal point.
Output
Output d values, the coordinates of any vector that has the given distance to each training vector with an absolute or relative error of at most 10⁻⁵.
Sample Input 1
2 3
0 0 2.5
3 0 2.5
1.5 0.5 2.5

Sample Output 1
1.5 -2

Sample Input 2
2 2
0 0 2
4 -4 6

Sample Output 2
1.414213562373 1.414213562373

Sample Input 3
4 3
0 1 2 3 2
1 2 -1 7 5
1 0.3 3.4 1.2 3.3

Sample Output 3
1 2 3 4
Editorial
Rendered from the copied solution.tex file. The original TeX source remains
available below.
Key Observations
Let $y = x - p_1$, and write $e_i$ for the given distance from $x$ to $p_i$. Then the first constraint says $\|y\| = e_1$.
For every $i \ge 2$, let $v_i = p_i - p_1$. Since $\|x-p_i\| = e_i$, we get \[ \|y - v_i\|^2 = e_i^2. \] Subtracting $\|y\|^2 = e_1^2$ removes the quadratic term: \[ 2\,v_i \cdot y = \|v_i\|^2 + e_1^2 - e_i^2. \] So all constraints except the first become linear equations in $y$.
The linear equations define an affine subspace. We need any point in that subspace whose norm is exactly $e_1$.
If we compute a basis of the nullspace of the linear system, then the point of minimum norm in the affine subspace is obtained by projecting any particular solution away from the nullspace.
The difference between $e_1^2$ and the squared norm of that minimum-norm point tells us how far to move along a nullspace direction to reach norm exactly $e_1$.
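As a concrete instance of the linearization (a worked check against Sample Input 1, not part of the copied write-up): take $p_1=(0,0)$ with $e_1=2.5$. For $p_2=(3,0)$, $e_2=2.5$, we get $v_2=(3,0)$ and \[ v_2 \cdot y = \frac{9 + 2.5^2 - 2.5^2}{2} = 4.5, \] so $3y_1 = 4.5$ and $y_1 = 1.5$. For $p_3=(1.5,0.5)$, $e_3=2.5$, we get $\|v_3\|^2 = 2.5$ and $1.5y_1 + 0.5y_2 = 1.25$, so $y_2 = -2$. Indeed $\|y\|^2 = 1.5^2 + (-2)^2 = 6.25 = e_1^2$, and $x = p_1 + y = (1.5,-2)$ matches the sample output.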
Algorithm
Build the linear system \[ v_i \cdot y = \frac{\|v_i\|^2 + e_1^2 - e_i^2}{2} \qquad (i = 2,\dots,n). \]
Run Gaussian elimination on the augmented matrix to obtain:
one particular solution $y_0$ (set all free variables to $0$),
one basis vector for each free variable in the nullspace.
Orthonormalize the nullspace basis with Gram--Schmidt.
Project $y_0$ away from the nullspace to get the minimum-norm solution $y_{\min}$.
If $\|y_{\min}\| = e_1$, we are done. Otherwise, add \[ \sqrt{e_1^2 - \|y_{\min}\|^2} \] times any unit nullspace vector. This stays inside the affine subspace and fixes the norm.
Output $x = p_1 + y$.
Correctness Proof
We prove that the algorithm returns the correct answer.
Lemma 1.
Assume $\|y\| = e_1$, where $y=x-p_1$. Then for every $i \ge 2$, the constraint $\|x-p_i\| = e_i$ is equivalent to the linear equation \[ v_i \cdot y = \frac{\|v_i\|^2 + e_1^2 - e_i^2}{2}, \] where $v_i=p_i-p_1$.
Proof.
We have $\|y\|=\|x-p_1\|=e_1$ and $\|y-v_i\|=\|x-p_i\|=e_i$. Expanding $\|y-v_i\|^2=e_i^2$ gives \[ \|y\|^2 - 2v_i \cdot y + \|v_i\|^2 = e_i^2. \] Substituting $\|y\|^2=e_1^2$ and rearranging yields the stated linear equation. □
Lemma 2.
The solutions of the original problem are exactly the vectors $x=p_1+y$, where $y$ belongs to the affine subspace defined by the linear equations from Lemma 1 and also satisfies $\|y\|=e_1$.
Proof.
Given $\|y\|=e_1$, Lemma 1 shows that satisfying all distances except the first is equivalent to belonging to the affine subspace. The first distance condition is exactly $\|x-p_1\|=\|y\|=e_1$. Therefore the original system is equivalent to the norm condition together with membership in the affine subspace. □
Lemma 3.
Let $A$ be the linear system from Lemma 1. After Gaussian elimination, the algorithm constructs:
a particular solution $y_0$ of $Ay=b$,
a basis of the nullspace $\mathcal{N}=\{z:Az=0\}$.
Then every solution of $Ay=b$ is of the form $y_0+z$ with $z \in \mathcal{N}$.
Proof.
This is the standard characterization of solutions to a consistent linear system after row reduction. Free variables generate the nullspace, and setting them all to zero gives one particular solution. □
Lemma 4.
The vector $y_{\min}$ produced by projecting $y_0$ away from the nullspace is the minimum-norm solution of $Ay=b$.
Proof.
After orthonormalizing the nullspace basis, subtracting from $y_0$ its orthogonal projection onto the nullspace gives a vector $y_{\min}$ that is orthogonal to the nullspace. Any other solution has the form $y_{\min}+z$ with $z \in \mathcal{N}$ by Lemma 3, and orthogonality gives \[ \|y_{\min}+z\|^2 = \|y_{\min}\|^2 + \|z\|^2 \ge \|y_{\min}\|^2. \] So $y_{\min}$ is exactly the minimum-norm solution. □
Lemma 5.
If the affine subspace contains any vector of norm $e_1$, then the algorithm constructs one.
Proof.
By Lemma 4, every solution has norm at least $\|y_{\min}\|$. If $\|y_{\min}\|=e_1$, the algorithm already has a valid vector. Otherwise a valid vector exists with norm $e_1 > \|y_{\min}\|$, so it differs from $y_{\min}$ and the nullspace must contain a nontrivial direction. Moving along any unit nullspace vector $u$ keeps us inside the affine subspace, and by orthogonality to $y_{\min}$ the norm becomes \[ \|y_{\min} + t u\|^2 = \|y_{\min}\|^2 + t^2. \] Choosing $t = \sqrt{e_1^2 - \|y_{\min}\|^2}$ gives norm exactly $e_1$. □
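As a concrete instance (a worked check against Sample Input 2, not part of the copied write-up): with $p_1=(0,0)$, $e_1=2$, $p_2=(4,-4)$, $e_2=6$, the single linear equation is \[ 4y_1 - 4y_2 = \frac{32 + 4 - 36}{2} = 0, \] so the affine subspace is the line $y_1=y_2$. Its minimum-norm point is $y_{\min}=(0,0)$ with $\|y_{\min}\| = 0 < e_1$, the unit nullspace vector is $u=(1,1)/\sqrt{2}$, and $t=\sqrt{e_1^2-0}=2$ gives $y=(\sqrt{2},\sqrt{2})$, matching the sample output $1.414213562373\ 1.414213562373$.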
Theorem.
The algorithm outputs the correct answer for every valid input.
Proof.
By Lemma 2, it is enough to find a vector $y$ in the affine subspace with norm $e_1$. Lemmas 3, 4, and 5 show that the algorithm constructs such a vector, and therefore $x=p_1+y$ satisfies all required distances. □
Complexity Analysis
Gaussian elimination on at most $499$ equations and $500$ variables takes $O(nd^2)$ time, and the nullspace orthonormalization costs $O(d^3)$ in the worst case. With $d,n \le 500$, this easily fits in the 6-second time limit. The memory usage is $O(nd + d^2)$ for the augmented matrix and the nullspace bases.
Implementation Notes
Using `long double` keeps the numerical error comfortably below the $10^{-5}$ tolerance.
The generated test data is consistent, so we do not need to handle infeasible systems.
Small negative values caused by floating-point roundoff are clamped to zero before taking square roots.
Code
Exact copied C++ implementation from solution.cpp.
#include <bits/stdc++.h>
using namespace std;
namespace {
const long double EPS = 1e-12L;
long double dot_product(const vector<long double>& a, const vector<long double>& b) {
long double result = 0;
for (int i = 0; i < int(a.size()); ++i) {
result += a[i] * b[i];
}
return result;
}
void solve() {
int d, n;
cin >> d >> n;
vector<vector<long double>> points(n, vector<long double>(d));
vector<long double> dist(n);
for (int i = 0; i < n; ++i) {
for (int j = 0; j < d; ++j) {
cin >> points[i][j];
}
cin >> dist[i];
}
vector<vector<long double>> aug;
aug.reserve(max(0, n - 1));
for (int i = 1; i < n; ++i) {
vector<long double> row(d + 1, 0);
long double sq_norm = 0;
for (int j = 0; j < d; ++j) {
row[j] = points[i][j] - points[0][j];
sq_norm += row[j] * row[j];
}
row[d] = (sq_norm + dist[0] * dist[0] - dist[i] * dist[i]) / 2.0L;
aug.push_back(row);
}
vector<int> pivot_col;
int row = 0;
const int m = n - 1;
for (int col = 0; col < d && row < m; ++col) {
int pivot = row;
for (int i = row + 1; i < m; ++i) {
if (fabsl(aug[i][col]) > fabsl(aug[pivot][col])) {
pivot = i;
}
}
if (fabsl(aug[pivot][col]) <= EPS) {
continue;
}
swap(aug[row], aug[pivot]);
long double div = aug[row][col];
for (int j = col; j <= d; ++j) {
aug[row][j] /= div;
}
for (int i = 0; i < m; ++i) {
if (i == row || fabsl(aug[i][col]) <= EPS) {
continue;
}
long double factor = aug[i][col];
for (int j = col; j <= d; ++j) {
aug[i][j] -= factor * aug[row][j];
}
}
pivot_col.push_back(col);
++row;
}
int rank = row;
vector<int> is_pivot(d, 0);
for (int col : pivot_col) {
is_pivot[col] = 1;
}
vector<long double> y(d, 0);
for (int i = 0; i < rank; ++i) {
y[pivot_col[i]] = aug[i][d];
}
vector<vector<long double>> null_basis;
for (int free_col = 0; free_col < d; ++free_col) {
if (is_pivot[free_col]) {
continue;
}
vector<long double> vec(d, 0);
vec[free_col] = 1;
for (int i = 0; i < rank; ++i) {
vec[pivot_col[i]] = -aug[i][free_col];
}
null_basis.push_back(vec);
}
vector<vector<long double>> ortho_basis;
for (vector<long double> vec : null_basis) {
for (const vector<long double>& q : ortho_basis) {
long double proj = dot_product(vec, q);
for (int j = 0; j < d; ++j) {
vec[j] -= proj * q[j];
}
}
long double norm = sqrtl(max((long double)0, dot_product(vec, vec)));
if (norm <= EPS) {
continue;
}
for (int j = 0; j < d; ++j) {
vec[j] /= norm;
}
ortho_basis.push_back(vec);
}
for (const vector<long double>& q : ortho_basis) {
long double proj = dot_product(y, q);
for (int j = 0; j < d; ++j) {
y[j] -= proj * q[j];
}
}
long double current_norm_sq = dot_product(y, y);
long double remaining_sq = dist[0] * dist[0] - current_norm_sq;
if (remaining_sq < 0 && remaining_sq > -1e-8L) {
remaining_sq = 0;
}
if (remaining_sq > 1e-12L && !ortho_basis.empty()) {
long double extra = sqrtl(remaining_sq);
for (int j = 0; j < d; ++j) {
y[j] += extra * ortho_basis[0][j];
}
}
cout << fixed << setprecision(12);
for (int j = 0; j < d; ++j) {
if (j) {
cout << ' ';
}
cout << double(points[0][j] + y[j]);
}
cout << '\n';
}
} // namespace
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
solve();
return 0;
}
Source Files
Exact copied source-of-truth files. Edit solution.tex for the write-up and solution.cpp for the implementation.
\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{enumitem}
\title{ICPC World Finals 2020\\N. What’s Our Vector, Victor?}
\author{}
\date{}
\begin{document}
\maketitle
\section*{Problem Summary}
We are given $n$ known vectors $p_1,\dots,p_n \in \mathbb{R}^d$ and the Euclidean distances from an
unknown vector $x$ to each of them. We must output any vector $x$ consistent with all those distances.
The hidden test data is generated from an actual vector, so a solution always exists.
\section*{Key Observations}
\begin{itemize}[leftmargin=*]
\item Let $y = x - p_1$, and write $e_i$ for the given distance from $x$ to $p_i$. Then the first constraint says $\|y\| = e_1$.
\item For every $i \ge 2$, let $v_i = p_i - p_1$. Since $\|x-p_i\| = e_i$, we get
\[
\|y - v_i\|^2 = e_i^2.
\]
Subtracting $\|y\|^2 = e_1^2$ removes the quadratic term:
\[
2\,v_i \cdot y = \|v_i\|^2 + e_1^2 - e_i^2.
\]
So all constraints except the first become linear equations in $y$.
\item The linear equations define an affine subspace. We need any point in that subspace whose norm is
exactly $e_1$.
\item If we compute a basis of the nullspace of the linear system, then the point of minimum norm in the
affine subspace is obtained by projecting any particular solution away from the nullspace.
\item The difference between $e_1^2$ and the squared norm of that minimum-norm point tells us how far
to move along a nullspace direction to reach norm exactly $e_1$.
\end{itemize}
\section*{Algorithm}
\begin{enumerate}[leftmargin=*]
\item Build the linear system
\[
v_i \cdot y = \frac{\|v_i\|^2 + e_1^2 - e_i^2}{2}
\qquad (i = 2,\dots,n).
\]
\item Run Gaussian elimination on the augmented matrix to obtain:
\begin{itemize}[leftmargin=*]
\item one particular solution $y_0$ (set all free variables to $0$),
\item one basis vector for each free variable in the nullspace.
\end{itemize}
\item Orthonormalize the nullspace basis with Gram--Schmidt.
\item Project $y_0$ away from the nullspace to get the minimum-norm solution $y_{\min}$.
\item If $\|y_{\min}\| = e_1$, we are done.
Otherwise, add
\[
\sqrt{e_1^2 - \|y_{\min}\|^2}
\]
times any unit nullspace vector. This stays inside the affine subspace and fixes the norm.
\item Output $x = p_1 + y$.
\end{enumerate}
\section*{Correctness Proof}
We prove that the algorithm returns the correct answer.
\paragraph{Lemma 1.}
Assume $\|y\| = e_1$, where $y=x-p_1$. Then for every $i \ge 2$, the constraint $\|x-p_i\| = e_i$ is
equivalent to the linear equation
\[
v_i \cdot y = \frac{\|v_i\|^2 + e_1^2 - e_i^2}{2},
\]
where $v_i=p_i-p_1$.
\paragraph{Proof.}
We have $\|y\|=\|x-p_1\|=e_1$ and $\|y-v_i\|=\|x-p_i\|=e_i$.
Expanding $\|y-v_i\|^2=e_i^2$ gives
\[
\|y\|^2 - 2v_i \cdot y + \|v_i\|^2 = e_i^2.
\]
Substituting $\|y\|^2=e_1^2$ and rearranging yields the stated linear equation. \qed
\paragraph{Lemma 2.}
The solutions of the original problem are exactly the vectors $x=p_1+y$, where $y$ belongs to
the affine subspace defined by the linear equations from Lemma 1 and also satisfies $\|y\|=e_1$.
\paragraph{Proof.}
Given $\|y\|=e_1$, Lemma 1 shows that satisfying all distances except the first is equivalent to belonging to the affine subspace.
The first distance is exactly $\|x-p_1\|=\|y\|=e_1$. Therefore the original system and the stated affine
subspace plus norm condition are equivalent. \qed
\paragraph{Lemma 3.}
Let $A$ be the linear system from Lemma 1. After Gaussian elimination, the algorithm constructs:
\begin{itemize}[leftmargin=*]
\item a particular solution $y_0$ of $Ay=b$,
\item a basis of the nullspace $\mathcal{N}=\{z:Az=0\}$.
\end{itemize}
Then every solution of $Ay=b$ is of the form $y_0+z$ with $z \in \mathcal{N}$.
\paragraph{Proof.}
This is the standard characterization of solutions to a consistent linear system after row reduction. Free
variables generate the nullspace, and setting them all to zero gives one particular solution. \qed
\paragraph{Lemma 4.}
The vector $y_{\min}$ produced by projecting $y_0$ away from the nullspace is the minimum-norm solution
of $Ay=b$.
\paragraph{Proof.}
After orthonormalizing the nullspace basis, subtracting from $y_0$ its orthogonal projection onto the
nullspace gives a vector $y_{\min}$ that is orthogonal to the nullspace.
Any other solution has the form $y_{\min}+z$ with $z \in \mathcal{N}$ by Lemma 3, and orthogonality gives
\[
\|y_{\min}+z\|^2 = \|y_{\min}\|^2 + \|z\|^2 \ge \|y_{\min}\|^2.
\]
So $y_{\min}$ is exactly the minimum-norm solution. \qed
\paragraph{Lemma 5.}
If the affine subspace contains any vector of norm $e_1$, then the algorithm constructs one.
\paragraph{Proof.}
By Lemma 4, every solution has norm at least $\|y_{\min}\|$.
If $\|y_{\min}\|=e_1$, the algorithm already has a valid vector.
Otherwise a valid vector exists with norm $e_1 > \|y_{\min}\|$, so it differs from $y_{\min}$ and the
nullspace must contain a nontrivial direction.
Moving along any unit nullspace vector keeps us inside the affine subspace, and by orthogonality to
$y_{\min}$ the norm becomes
\[
\|y_{\min} + t u\|^2 = \|y_{\min}\|^2 + t^2.
\]
Choosing $t = \sqrt{e_1^2 - \|y_{\min}\|^2}$ gives norm exactly $e_1$. \qed
\paragraph{Theorem.}
The algorithm outputs the correct answer for every valid input.
\paragraph{Proof.}
By Lemma 2, it is enough to find a vector $y$ in the affine subspace with norm $e_1$.
Lemmas 3, 4, and 5 show that the algorithm constructs such a vector, and therefore $x=p_1+y$ satisfies
all required distances. \qed
\section*{Complexity Analysis}
Gaussian elimination on at most $499$ equations and $500$ variables takes $O(nd^2)$ time, and the
nullspace orthonormalization costs $O(d^3)$ in the worst case. With $d,n \le 500$, this easily fits in
the 6-second time limit. The memory usage is $O(nd + d^2)$ for the augmented matrix and the nullspace bases.
\section*{Implementation Notes}
\begin{itemize}[leftmargin=*]
\item Using \texttt{long double} keeps the numerical error comfortably below the $10^{-5}$ tolerance.
\item The generated test data is consistent, so we do not need to handle infeasible systems.
\item Small negative values caused by floating-point roundoff are clamped to zero before taking square
roots.
\end{itemize}
\end{document}