Using Python's built-in defaultdict we can easily define a tree data structure:

from collections import defaultdict

def tree(): return defaultdict(tree)

That's it!
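A quick usage sketch: because every missing key materializes as a new subtree on first access, arbitrarily deep paths can be assigned without any setup (the key names below are made up for illustration).

```python
from collections import defaultdict

def tree(): return defaultdict(tree)

# nested keys spring into existence on first access
t = tree()
t['animals']['mammals']['cats'] = 'meow'
```

No intermediate dictionaries ever need to be created by hand; each lookup on a missing key calls `tree()` again and hangs a fresh subtree there.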
import base64
import random
import cv2
import torch
import torchvision

def svg(points, labels, thumbnails, legend_size=1e-1, legend_font_size=5e-2, circle_radius=5e-3):
    # rescale every coordinate into [0, 1] per dimension
    # (torch's min(0)/max(0) return a (values, indices) pair, hence the [0])
    points = (points - points.min(0)[0]) / (points.max(0)[0] - points.min(0)[0])
    class_index = sorted(set(labels))
    # spread class hues evenly around the 360-degree color wheel
    class_colors = [360.0 * i / len(class_index) for i in range(len(class_index))]
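The min-max normalization in the first line of the function can be seen in isolation with a small NumPy example (the sample points below are made up). Note one API difference: with a `torch.Tensor`, `points.min(0)` returns a `(values, indices)` pair, which is why the source indexes with `[0]`; with NumPy, `min(0)` is already the values array.

```python
import numpy as np

pts = np.array([[0.0, 10.0],
                [5.0, 20.0],
                [10.0, 30.0]])
# min-max normalize each column independently into [0, 1]
norm = (pts - pts.min(0)) / (pts.max(0) - pts.min(0))
```

After this step every coordinate lies in [0, 1], so the points can be placed directly in a unit SVG viewport.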
class AttentionLSTM(LSTM):
    """LSTM with attention mechanism.

    This is an LSTM incorporating an attention mechanism into its hidden
    states. Currently, the context vector calculated from the attended
    vector is fed into the model's internal states, closely following the
    model by Xu et al. (2016, Sec. 3.1.2), using a soft attention model
    following Bahdanau et al. (2014).

    The layer expects two inputs instead of the usual one:
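The Bahdanau-style soft attention the docstring refers to can be sketched on its own. This is a minimal NumPy illustration, not the layer's implementation: the function name, parameter names, and shapes below are all assumptions made for the example.

```python
import numpy as np

def soft_attention(h, annotations, W_h, W_a, v):
    """Additive (Bahdanau-style) soft attention sketch.

    h           : (d_h,)   current hidden state
    annotations : (n, d_a) vectors to attend over
    W_h, W_a, v : assumed projection parameters
    """
    # alignment scores: e_i = v . tanh(W_h h + W_a a_i)
    scores = np.array([v @ np.tanh(W_h @ h + W_a @ a) for a in annotations])
    # softmax over the n annotation positions (numerically stabilized)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # context vector: attention-weighted sum of the annotations
    return alpha @ annotations, alpha

rng = np.random.default_rng(0)
h = rng.standard_normal(8)
annotations = rng.standard_normal((5, 6))
W_h = rng.standard_normal((4, 8))
W_a = rng.standard_normal((4, 6))
v = rng.standard_normal(4)
context, alpha = soft_attention(h, annotations, W_h, W_a, v)
```

The resulting context vector is what such a layer would feed back into the LSTM's internal states at each step.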
import cv2
import numpy as np

def in_front_of_both_cameras(first_points, second_points, rot, trans):
    # check if the point correspondences are in front of both cameras
    for first, second in zip(first_points, second_points):
        # triangulate the depth of the point in the first camera's frame
        first_z = np.dot(rot[0, :] - second[0] * rot[2, :], trans) / \
                  np.dot(rot[0, :] - second[0] * rot[2, :], second)
        first_3d_point = np.array([first[0] * first_z, first[1] * first_z, first_z])
        # express the same point in the second camera's frame
        second_3d_point = np.dot(rot.T, first_3d_point) - np.dot(rot.T, trans)
        # a valid reconstruction must have positive depth in both frames
        if first_3d_point[2] < 0 or second_3d_point[2] < 0:
            return False
    return True
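The positive-depth (cheirality) test this function relies on can be illustrated in isolation. The rotation, translation, and point below are made-up values for the sketch, not data from the source:

```python
import numpy as np

rot = np.eye(3)                    # assumed relative rotation: none
trans = np.array([1.0, 0.0, 0.0])  # assumed baseline: one unit along x

point_in_first = np.array([0.5, 0.2, 4.0])  # z > 0: in front of the first camera
# same frame change the function applies: R^T p - R^T t
point_in_second = np.dot(rot.T, point_in_first) - np.dot(rot.T, trans)

# the correspondence is physically plausible only if depth is positive in both frames
in_front = point_in_first[2] > 0 and point_in_second[2] > 0
```

A triangulated point with negative depth in either camera frame lies behind that camera, which is how spurious solutions of the essential-matrix decomposition are rejected.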