diff --git a/notebooks/Block_7/Jupyter Notebook Block 7 - Generative Models - DeepDream, Neural Style Transfer, and GAN's.ipynb b/notebooks/Block_7/Jupyter Notebook Block 7 - Generative Models - DeepDream, Neural Style Transfer, and GAN's.ipynb
index 9c9748f7b060e60ce8b9e4a18dc14e269f891f54..3bb0f48e35af17a8e8e734b671decfc743323fc5 100644
--- a/notebooks/Block_7/Jupyter Notebook Block 7 - Generative Models - DeepDream, Neural Style Transfer, and GAN's.ipynb	
+++ b/notebooks/Block_7/Jupyter Notebook Block 7 - Generative Models - DeepDream, Neural Style Transfer, and GAN's.ipynb	
@@ -452,8 +452,8 @@
     "\n",
     "In this context, _style_ essentially means textures, colors, and visual patterns in the image, at\n",
     "various spatial scales; and the _content_ is the higher-level macrostructure of the image.\n",
-    "For instance, blue-and-yellow circular brushstrokes are considered to be the style in figure\n",
-    "8.7 (using _Starry Night_ by Vincent Van Gogh), and the buildings in the Tübingen\n",
+    "For instance, blue-and-yellow circular brushstrokes are considered to be the style in the Figure\n",
+    "above (using _Starry Night_ by Vincent Van Gogh), and the buildings in the Tübingen\n",
     "photograph are considered to be the content.\n",
     "\n",
     "The idea of style transfer, which is tightly related to that of texture generation, has\n",
@@ -475,8 +475,40 @@
    "cell_type": "raw",
    "metadata": {},
    "source": [
-    "loss = distance(style(reference_image) - style(generated_image)) +\n",
-    "distance(content(original_image) - content(generated_image))"
+    "loss = distance(style(reference_image) , style(generated_image)) +\n",
+    "distance(content(original_image) , content(generated_image))"
    ]
   },
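+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a rough, non-authoritative sketch of this loss, the cell below shows how the two\n",
+    "distance terms could be computed with plain TensorFlow ops. The `gram_matrix` helper and\n",
+    "the `style_weight`/`content_weight` coefficients are illustrative assumptions, not part of\n",
+    "the loss as defined above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import tensorflow as tf\n",
+    "\n",
+    "def gram_matrix(features):\n",
+    "    # Assumption: style is compared via Gram matrices of layer activations.\n",
+    "    channels = int(features.shape[-1])\n",
+    "    flat = tf.reshape(features, [-1, channels])\n",
+    "    return tf.matmul(flat, flat, transpose_a=True) / tf.cast(tf.shape(flat)[0], tf.float32)\n",
+    "\n",
+    "# Hypothetical weighting coefficients; their balance is a tunable design choice.\n",
+    "def style_transfer_loss(style_feats, gen_style_feats, content_feats, gen_content_feats,\n",
+    "                        style_weight=1e-2, content_weight=1e4):\n",
+    "    style_loss = tf.reduce_mean(tf.square(gram_matrix(style_feats) - gram_matrix(gen_style_feats)))\n",
+    "    content_loss = tf.reduce_mean(tf.square(content_feats - gen_content_feats))\n",
+    "    return style_weight * style_loss + content_weight * content_loss"
+   ]
+  },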
   {
@@ -922,7 +954,7 @@
    "source": [
     "Finally, you’ll set up the gradient-descent process. In the original Gatys et al. paper,\n",
     "optimization is performed using the L-BFGS algorithm, so that’s what you’ll use here.\n",
-    "This is a key difference from the DeepDream example in section 8.2. The L-BFGS algorithm\n",
+    "This is a key difference from the DeepDream example in the previous section. The L-BFGS algorithm\n",
     "comes packaged with SciPy, but there are two slight limitations with the SciPy\n",
     "implementation:\n",
     "- It requires that you pass the value of the loss function and the value of the gradients\n",
@@ -2365,7 +2397,27 @@
       "discriminator loss: 0.7263772487640381\n",
       "adversarial loss: 0.7299085855484009\n",
       "discriminator loss: 0.6585697531700134\n",
-      "adversarial loss: 0.9046730995178223\n"
+      "adversarial loss: 0.9046730995178223\n",
+      "discriminator loss: 0.6825298070907593\n",
+      "adversarial loss: 0.7765655517578125\n",
+      "discriminator loss: 0.6616697311401367\n",
+      "adversarial loss: 0.8104211091995239\n",
+      "discriminator loss: 0.7299661636352539\n",
+      "adversarial loss: 1.2298979759216309\n",
+      "discriminator loss: 0.699345052242279\n",
+      "adversarial loss: 0.878213107585907\n",
+      "discriminator loss: 0.6551450490951538\n",
+      "adversarial loss: 0.7156903743743896\n",
+      "discriminator loss: 0.6734248995780945\n",
+      "adversarial loss: 0.7619784474372864\n",
+      "discriminator loss: 0.7167571187019348\n",
+      "adversarial loss: 0.8063276410102844\n",
+      "discriminator loss: 0.6934176683425903\n",
+      "adversarial loss: 0.9151533246040344\n",
+      "discriminator loss: 1.1302387714385986\n",
+      "adversarial loss: 0.9292305707931519\n",
+      "discriminator loss: 0.7230756878852844\n",
+      "adversarial loss: 0.7185141444206238\n"
      ]
     }
    ],