DSU on Trees #1: Merging Subsets on a Tree

Hi, this post covers a fairly common technique used in problems on trees.

The technique is called DSU on Trees, also known as "Small to Large" merging.

The idea is used in problems that ask us to answer queries about counts of vertices in the subtree of a node. For example: each node is coloured; find the number of nodes in the subtree of v that have colour c.

The approach is to keep a frequency map of colours for each vertex and merge the children's maps into the parent's map to get the answer for the parent. The merging must follow the small-to-large trick: always merge the smaller map into the larger one. A colour's entry then only ever moves into a map at least twice the size of its current one, so each entry moves O(log n) times, and with each map operation costing O(log n), the total merging cost for the entire process drops from O(n² log n) to O(n log² n). A minimal sketch of the merge step follows.
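Below is a minimal, self-contained sketch of just the merge step, assuming map<colour, count> frequency maps like the ones used in the solution later in this post (the helper name merge_small_to_large is my own choice for illustration):

#include <bits/stdc++.h>
using namespace std;

// Merge two frequency maps, always moving entries from the smaller
// map into the larger one. After the call, `a` holds the merged
// result and `b` is empty.
void merge_small_to_large(map<long long,long long>& a, map<long long,long long>& b){
    if (a.size() < b.size()) swap(a, b);      // make sure `a` is the bigger map
    for (auto& e : b) a[e.first] += e.second; // add the smaller map's counts
    b.clear();
}

int main(){
    map<long long,long long> a{{1,2},{2,1}}, b{{2,3},{5,1}};
    merge_small_to_large(a, b);
    for (auto& e : a) cout << e.first << " -> " << e.second << '\n';
    // prints: 1 -> 2, 2 -> 4, 5 -> 1
}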

Problem 01 – E. Lomsat gelral

Problem Link – https://codeforces.com/contest/600/problem/E

This problem is a straightforward application of the above technique. It can be solved in a variety of ways, but the DSU-on-trees technique applies to a broader category of problems, so we will look at an implementation using the above trick.

I have created a col_occ map array, which stores the frequency of each colour in the subtree of every node. Apart from that, I have also created a maxfreq array, which stores the maximum colour frequency in each node's subtree, and an ans array that stores the answer (the sum of all dominating colours) for each node.

The idea: we run a DFS from the root. For each child, after its recursive call returns, we check whether the child's map is larger than ours; if so, we swap the two maps (and adopt the child's answer and maximum frequency, since they describe the bigger map). Then we merge the smaller map into the bigger one by adding the frequencies, and for each colour added we check whether it beats or ties the best frequency seen so far, updating the maximum and the running sum of dominating colours accordingly.

#include<bits/stdc++.h>
using namespace std;
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
#define M1 					1000000007
#define M2 					998244353
#define ll 					long long int
#define pll 				pair<ll,ll>
#define mll 				map<ll,ll>
#define F 					first
#define S 					second
#define PB 					push_back
#define mp 					make_pair
#define lb 					lower_bound
#define ub 					upper_bound
#define V(a) 				vector<a>
#define endl 				'\n'
#define test(t) 			while(t--)
#define PI 					acos(-1.0)
#define rep(i,a,b) 			for(ll i=a;i<b;i++)
#define repp(i,b,a) 		for(ll i=b-1;i>=a;i--)
#define clr(ar, val)		memset(ar, val, sizeof(ar))
#define setbits(x) 			__builtin_popcountll(x)
#define zrobits(x) 			__builtin_ctzll(x)
#define ps(y) 				fixed << setprecision(y)
#define all(x) 				begin(x),end(x)
#define allr(x) 			rbegin(x),rend(x)
const int inf= 				0x3f3f3f3f;
const ll INF= 				0x3f3f3f3f3f3f3f3f;
const int dx[4]= 			{ 0, -1, 0, 1 };
const int dy[4]= 			{ -1, 0, 1, 0 };
const int MAXN = 1e5 + 5;

vector<ll> adj[MAXN];
vector<ll> col(MAXN);
ll n;
map<ll,ll> col_occ[MAXN];
ll ans[MAXN];
ll maxfreq[MAXN];
// DFS with small-to-large merging: after processing u, col_occ[u]
// holds the colour frequencies of u's entire subtree.
void dfs(ll u, ll p){
    ll max_occ = 1;             // highest colour frequency seen so far
    col_occ[u][col[u]] = 1;     // start with u's own colour
    ll curr_sum = col[u];       // running sum of dominating colours
    for(ll v:adj[u]){
        if (v == p) continue;
        dfs(v,u);
        if (col_occ[v].size() > col_occ[u].size()){
            // The child owns the bigger map: swap, and adopt the
            // child's answer and maximum frequency along with it.
            swap(col_occ[v],col_occ[u]);
            curr_sum = ans[v];
            max_occ = maxfreq[v];
        }
        // Merge the smaller map into the bigger one.
        for(auto i:col_occ[v]){
            col_occ[u][i.F] += i.S;
            if (col_occ[u][i.F] > max_occ){
                // Strictly better frequency: restart the sum.
                max_occ = col_occ[u][i.F];
                curr_sum = i.F;
            }
            else if (col_occ[u][i.F] == max_occ){
                // Tie with the best frequency: this colour dominates too.
                curr_sum += i.F;
            }
        }
        col_occ[v].clear(); // free the smaller map once it has been merged
    }
    maxfreq[u] = max_occ;
    ans[u] = curr_sum;
}
void solve()
{
    cin >> n;
    rep(i,1,n+1) cin >> col[i];  // colours of nodes 1..n
    rep(i,0,n-1){                // the n-1 tree edges
        ll x,y;
        cin >> x >> y;
        adj[x].PB(y);
        adj[y].PB(x);
    }
    dfs(1,-1);
    rep(i,1,n+1) cout << ans[i] << endl;
}

int32_t main(){
    ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0);
    int t=1;
    // cin>>t;
    test(t){
        solve();
    }
    return 0;
}
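Coming back to the query variant from the introduction (count the nodes of colour c in the subtree of v): the same merge answers such queries offline if we attach each query to its node and read the frequency map right after that node's merge finishes. Here is a minimal sketch under that assumption; queries, query_ans, and dfs_queries are my own illustrative names, and it reuses col, adj, col_occ, and the macros from the solution above:

// Hypothetical offline-query extension: queries[v] holds pairs of
// (colour asked about, index of the query) attached to node v.
vector<pll> queries[MAXN];
ll query_ans[MAXN];

void dfs_queries(ll u, ll p){
    col_occ[u][col[u]] = 1;
    for(ll v:adj[u]){
        if (v == p) continue;
        dfs_queries(v,u);
        if (col_occ[v].size() > col_occ[u].size())
            swap(col_occ[v],col_occ[u]);          // small-to-large swap
        for(auto i:col_occ[v]) col_occ[u][i.F] += i.S;
        col_occ[v].clear();                        // free the smaller map
    }
    // col_occ[u] is now the full frequency map of u's subtree,
    // so every query attached to u can be answered here.
    for(auto q:queries[u])
        query_ans[q.S] = col_occ[u].count(q.F) ? col_occ[u][q.F] : 0;
}

The answers land in query_ans indexed by query number, so after the DFS you simply print query_ans for every query in input order.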

What to try next?
